Dec 2 01:43:59 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Dec 2 01:43:59 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 2 01:43:59 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 2 01:43:59 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 2 01:43:59 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 2 01:43:59 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 2 01:43:59 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 2 01:43:59 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 2 01:43:59 localhost kernel: signal: max sigframe size: 1776
Dec 2 01:43:59 localhost kernel: BIOS-provided physical RAM map:
Dec 2 01:43:59 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 2 01:43:59 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 2 01:43:59 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 2 01:43:59 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 2 01:43:59 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 2 01:43:59 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 2 01:43:59 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 2 01:43:59 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Dec 2 01:43:59 localhost kernel: NX (Execute Disable) protection: active
Dec 2 01:43:59 localhost kernel: SMBIOS 2.8 present.
Dec 2 01:43:59 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 2 01:43:59 localhost kernel: Hypervisor detected: KVM
Dec 2 01:43:59 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 2 01:43:59 localhost kernel: kvm-clock: using sched offset of 1907332051 cycles
Dec 2 01:43:59 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 2 01:43:59 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 2 01:43:59 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Dec 2 01:43:59 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 2 01:43:59 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 2 01:43:59 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 2 01:43:59 localhost kernel: Using GB pages for direct mapping
Dec 2 01:43:59 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Dec 2 01:43:59 localhost kernel: ACPI: Early table checksum verification disabled
Dec 2 01:43:59 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 2 01:43:59 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:59 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:59 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:59 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 2 01:43:59 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:59 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 2 01:43:59 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 2 01:43:59 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 2 01:43:59 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 2 01:43:59 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 2 01:43:59 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 2 01:43:59 localhost kernel: No NUMA configuration found
Dec 2 01:43:59 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Dec 2 01:43:59 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Dec 2 01:43:59 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Dec 2 01:43:59 localhost kernel: Zone ranges:
Dec 2 01:43:59 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 2 01:43:59 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 2 01:43:59 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Dec 2 01:43:59 localhost kernel: Device empty
Dec 2 01:43:59 localhost kernel: Movable zone start for each node
Dec 2 01:43:59 localhost kernel: Early memory node ranges
Dec 2 01:43:59 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 2 01:43:59 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 2 01:43:59 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Dec 2 01:43:59 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Dec 2 01:43:59 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 2 01:43:59 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 2 01:43:59 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 2 01:43:59 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 2 01:43:59 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 2 01:43:59 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 2 01:43:59 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 2 01:43:59 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 2 01:43:59 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 2 01:43:59 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 2 01:43:59 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 2 01:43:59 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 2 01:43:59 localhost kernel: TSC deadline timer available
Dec 2 01:43:59 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 2 01:43:59 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 2 01:43:59 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 2 01:43:59 localhost kernel: Booting paravirtualized kernel on KVM
Dec 2 01:43:59 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 2 01:43:59 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 2 01:43:59 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Dec 2 01:43:59 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 2 01:43:59 localhost kernel: Fallback order for Node 0: 0
Dec 2 01:43:59 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Dec 2 01:43:59 localhost kernel: Policy zone: Normal
Dec 2 01:43:59 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 2 01:43:59 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Dec 2 01:43:59 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 2 01:43:59 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 2 01:43:59 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 2 01:43:59 localhost kernel: software IO TLB: area num 8.
Dec 2 01:43:59 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Dec 2 01:43:59 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Dec 2 01:43:59 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 2 01:43:59 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Dec 2 01:43:59 localhost kernel: ftrace: allocated 176 pages with 3 groups
Dec 2 01:43:59 localhost kernel: Dynamic Preempt: voluntary
Dec 2 01:43:59 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 2 01:43:59 localhost kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 2 01:43:59 localhost kernel: #011Trampoline variant of Tasks RCU enabled.
Dec 2 01:43:59 localhost kernel: #011Rude variant of Tasks RCU enabled.
Dec 2 01:43:59 localhost kernel: #011Tracing variant of Tasks RCU enabled.
Dec 2 01:43:59 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 2 01:43:59 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 2 01:43:59 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 2 01:43:59 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 2 01:43:59 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 2 01:43:59 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Dec 2 01:43:59 localhost kernel: Console: colour VGA+ 80x25
Dec 2 01:43:59 localhost kernel: printk: console [tty0] enabled
Dec 2 01:43:59 localhost kernel: printk: console [ttyS0] enabled
Dec 2 01:43:59 localhost kernel: ACPI: Core revision 20211217
Dec 2 01:43:59 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 2 01:43:59 localhost kernel: x2apic enabled
Dec 2 01:43:59 localhost kernel: Switched APIC routing to physical x2apic.
Dec 2 01:43:59 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 2 01:43:59 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 2 01:43:59 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 2 01:43:59 localhost kernel: LSM: Security Framework initializing
Dec 2 01:43:59 localhost kernel: Yama: becoming mindful.
Dec 2 01:43:59 localhost kernel: SELinux: Initializing.
Dec 2 01:43:59 localhost kernel: LSM support for eBPF active
Dec 2 01:43:59 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 2 01:43:59 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 2 01:43:59 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 2 01:43:59 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 2 01:43:59 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 2 01:43:59 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 2 01:43:59 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 2 01:43:59 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 2 01:43:59 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 2 01:43:59 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 2 01:43:59 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 2 01:43:59 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 2 01:43:59 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 2 01:43:59 localhost kernel: Freeing SMP alternatives memory: 36K
Dec 2 01:43:59 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 2 01:43:59 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Dec 2 01:43:59 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 2 01:43:59 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 2 01:43:59 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 2 01:43:59 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 2 01:43:59 localhost kernel: ... version: 0
Dec 2 01:43:59 localhost kernel: ... bit width: 48
Dec 2 01:43:59 localhost kernel: ... generic registers: 6
Dec 2 01:43:59 localhost kernel: ... value mask: 0000ffffffffffff
Dec 2 01:43:59 localhost kernel: ... max period: 00007fffffffffff
Dec 2 01:43:59 localhost kernel: ... fixed-purpose events: 0
Dec 2 01:43:59 localhost kernel: ... event mask: 000000000000003f
Dec 2 01:43:59 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 2 01:43:59 localhost kernel: rcu: #011Max phase no-delay instances is 400.
Dec 2 01:43:59 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 2 01:43:59 localhost kernel: x86: Booting SMP configuration:
Dec 2 01:43:59 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Dec 2 01:43:59 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 2 01:43:59 localhost kernel: smpboot: Max logical packages: 8
Dec 2 01:43:59 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 2 01:43:59 localhost kernel: node 0 deferred pages initialised in 26ms
Dec 2 01:43:59 localhost kernel: devtmpfs: initialized
Dec 2 01:43:59 localhost kernel: x86/mm: Memory block size: 128MB
Dec 2 01:43:59 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 2 01:43:59 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 2 01:43:59 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 2 01:43:59 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 2 01:43:59 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 2 01:43:59 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 2 01:43:59 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 2 01:43:59 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 2 01:43:59 localhost kernel: audit: type=2000 audit(1764657838.654:1): state=initialized audit_enabled=0 res=1
Dec 2 01:43:59 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 2 01:43:59 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 2 01:43:59 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 2 01:43:59 localhost kernel: cpuidle: using governor menu
Dec 2 01:43:59 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Dec 2 01:43:59 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 2 01:43:59 localhost kernel: PCI: Using configuration type 1 for base access
Dec 2 01:43:59 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 2 01:43:59 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 2 01:43:59 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Dec 2 01:43:59 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 2 01:43:59 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 2 01:43:59 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 2 01:43:59 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 2 01:43:59 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 2 01:43:59 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 2 01:43:59 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 2 01:43:59 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 2 01:43:59 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 2 01:43:59 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 2 01:43:59 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 2 01:43:59 localhost kernel: ACPI: Interpreter enabled
Dec 2 01:43:59 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 2 01:43:59 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 2 01:43:59 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 2 01:43:59 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 2 01:43:59 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 2 01:43:59 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 2 01:43:59 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [3] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [4] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [5] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [6] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [7] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [8] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [9] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [10] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [11] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [12] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [13] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [14] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [15] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [16] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [17] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [18] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [19] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [20] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [21] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [22] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [23] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [24] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [25] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [26] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [27] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [28] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [29] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [30] registered
Dec 2 01:43:59 localhost kernel: acpiphp: Slot [31] registered
Dec 2 01:43:59 localhost kernel: PCI host bridge to bus 0000:00
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 2 01:43:59 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 2 01:43:59 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 2 01:43:59 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 2 01:43:59 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 2 01:43:59 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 2 01:43:59 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 2 01:43:59 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 2 01:43:59 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 2 01:43:59 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 2 01:43:59 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 2 01:43:59 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 2 01:43:59 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 2 01:43:59 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 2 01:43:59 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 2 01:43:59 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Dec 2 01:43:59 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 2 01:43:59 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 2 01:43:59 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 2 01:43:59 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 2 01:43:59 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 2 01:43:59 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 2 01:43:59 localhost kernel: iommu: Default domain type: Translated
Dec 2 01:43:59 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 2 01:43:59 localhost kernel: SCSI subsystem initialized
Dec 2 01:43:59 localhost kernel: ACPI: bus type USB registered
Dec 2 01:43:59 localhost kernel: usbcore: registered new interface driver usbfs
Dec 2 01:43:59 localhost kernel: usbcore: registered new interface driver hub
Dec 2 01:43:59 localhost kernel: usbcore: registered new device driver usb
Dec 2 01:43:59 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 2 01:43:59 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 2 01:43:59 localhost kernel: PTP clock support registered
Dec 2 01:43:59 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 2 01:43:59 localhost kernel: NetLabel: Initializing
Dec 2 01:43:59 localhost kernel: NetLabel: domain hash size = 128
Dec 2 01:43:59 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Dec 2 01:43:59 localhost kernel: NetLabel: unlabeled traffic allowed by default
Dec 2 01:43:59 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 2 01:43:59 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 2 01:43:59 localhost kernel: vgaarb: loaded
Dec 2 01:43:59 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 2 01:43:59 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 2 01:43:59 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 2 01:43:59 localhost kernel: pnp: PnP ACPI init
Dec 2 01:43:59 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 2 01:43:59 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 2 01:43:59 localhost kernel: NET: Registered PF_INET protocol family
Dec 2 01:43:59 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 2 01:43:59 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 2 01:43:59 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 2 01:43:59 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 2 01:43:59 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 2 01:43:59 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 2 01:43:59 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Dec 2 01:43:59 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 2 01:43:59 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 2 01:43:59 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 2 01:43:59 localhost kernel: NET: Registered PF_XDP protocol family
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 2 01:43:59 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 2 01:43:59 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 2 01:43:59 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 2 01:43:59 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 29760 usecs
Dec 2 01:43:59 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 2 01:43:59 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 2 01:43:59 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 2 01:43:59 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 2 01:43:59 localhost kernel: ACPI: bus type thunderbolt registered
Dec 2 01:43:59 localhost kernel: Initialise system trusted keyrings
Dec 2 01:43:59 localhost kernel: Key type blacklist registered
Dec 2 01:43:59 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Dec 2 01:43:59 localhost kernel: zbud: loaded
Dec 2 01:43:59 localhost kernel: integrity: Platform Keyring initialized
Dec 2 01:43:59 localhost kernel: NET: Registered PF_ALG protocol family
Dec 2 01:43:59 localhost kernel: xor: automatically using best checksumming function avx
Dec 2 01:43:59 localhost kernel: Key type asymmetric registered
Dec 2 01:43:59 localhost kernel: Asymmetric key parser 'x509' registered
Dec 2 01:43:59 localhost kernel: Running certificate verification selftests
Dec 2 01:43:59 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 2 01:43:59 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 2 01:43:59 localhost kernel: io scheduler mq-deadline registered
Dec 2 01:43:59 localhost kernel: io scheduler kyber registered
Dec 2 01:43:59 localhost kernel: io scheduler bfq registered
Dec 2 01:43:59 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 2 01:43:59 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 2 01:43:59 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 2 01:43:59 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 2 01:43:59 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 2 01:43:59 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 2 01:43:59 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 2 01:43:59 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 2 01:43:59 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 2 01:43:59 localhost kernel: Non-volatile memory driver v1.3
Dec 2 01:43:59 localhost kernel: rdac: device handler registered
Dec 2 01:43:59 localhost kernel: hp_sw: device handler registered
Dec 2 01:43:59 localhost kernel: emc: device handler registered
Dec 2 01:43:59 localhost kernel: alua: device handler registered
Dec 2 01:43:59 localhost kernel: libphy: Fixed MDIO Bus: probed
Dec 2 01:43:59 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Dec 2 01:43:59 localhost kernel: ehci-pci: EHCI PCI platform driver
Dec 2 01:43:59 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Dec 2 01:43:59 localhost kernel: ohci-pci: OHCI PCI platform driver
Dec 2 01:43:59 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Dec 2 01:43:59 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 2 01:43:59 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 2 01:43:59 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 2 01:43:59 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 2 01:43:59 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 2 01:43:59 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 2 01:43:59 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 2 01:43:59 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Dec 2 01:43:59 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 2 01:43:59 localhost kernel: hub 1-0:1.0: USB hub found
Dec 2 01:43:59 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 2 01:43:59 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 2 01:43:59 localhost kernel: usbserial: USB Serial support registered for generic
Dec 2 01:43:59 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 2 01:43:59 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 2 01:43:59 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 2 01:43:59 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 2 01:43:59 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 2 01:43:59 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 2 01:43:59 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 2 01:43:59 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-02T06:43:58 UTC (1764657838)
Dec 2 01:43:59 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 2 01:43:59 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 2 01:43:59 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 2 01:43:59 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 2 01:43:59 localhost kernel: usbcore: registered new interface driver usbhid
Dec 2 01:43:59 localhost kernel: usbhid: USB HID core driver
Dec 2 01:43:59 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 2 01:43:59 localhost kernel: Initializing XFRM netlink socket
Dec 2 01:43:59 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 2 01:43:59 localhost kernel: Segment Routing with IPv6
Dec 2 01:43:59 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 2 01:43:59 localhost kernel: mpls_gso: MPLS GSO support
Dec 2 01:43:59 localhost kernel: IPI shorthand broadcast: enabled
Dec 2 01:43:59 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 2 01:43:59 localhost kernel: AES CTR mode by8 optimization enabled
Dec 2 01:43:59 localhost kernel: sched_clock: Marking stable (791422453, 183764266)->(1104752934, -129566215)
Dec 2 01:43:59 localhost kernel: registered taskstats version 1
Dec 2 01:43:59 localhost kernel: Loading compiled-in X.509 certificates
Dec 2 01:43:59 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 2 01:43:59 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 2 01:43:59 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 2 01:43:59 localhost kernel: zswap: loaded using pool lzo/zbud
Dec 2 01:43:59 localhost kernel: page_owner is disabled
Dec 2 01:43:59 localhost kernel: Key type big_key registered
Dec 2 01:43:59 localhost kernel: Freeing initrd memory: 74232K
Dec 2 01:43:59 localhost kernel: Key type encrypted registered
Dec 2 01:43:59 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 2 01:43:59 localhost kernel: Loading compiled-in module X.509 certificates
Dec 2 01:43:59 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 2 01:43:59 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 2 01:43:59 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 2 01:43:59 localhost kernel: ima: No architecture policies found
Dec 2 01:43:59 localhost kernel: evm: Initialising EVM extended attributes:
Dec 2 01:43:59 localhost kernel: evm: security.selinux
Dec 2 01:43:59 localhost kernel: evm: security.SMACK64 (disabled)
Dec 2 01:43:59 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 2 01:43:59 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 2 01:43:59 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 2 01:43:59 localhost kernel: evm: security.apparmor (disabled)
Dec 2 01:43:59 localhost kernel: evm: security.ima
Dec 2 01:43:59 localhost kernel: evm: security.capability
Dec 2 01:43:59 localhost kernel: evm: HMAC attrs: 0x1
Dec 2 01:43:59 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 2 01:43:59 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 2 01:43:59 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 2 01:43:59 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 2 01:43:59 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 2 01:43:59 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 2 01:43:59 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 2 01:43:59 localhost kernel: Freeing unused decrypted memory: 2036K
Dec 2 01:43:59 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Dec 2 01:43:59 localhost kernel: Write protecting the kernel read-only data: 26624k
Dec 2 01:43:59 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 2 01:43:59 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Dec 2 01:43:59 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 2 01:43:59 localhost kernel: Run /init as init process
Dec 2 01:43:59 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 2 01:43:59 localhost systemd[1]: Detected virtualization kvm.
Dec 2 01:43:59 localhost systemd[1]: Detected architecture x86-64.
Dec 2 01:43:59 localhost systemd[1]: Running in initrd.
Dec 2 01:43:59 localhost systemd[1]: No hostname configured, using default hostname.
Dec 2 01:43:59 localhost systemd[1]: Hostname set to .
Dec 2 01:43:59 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 2 01:43:59 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 2 01:43:59 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 2 01:43:59 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 2 01:43:59 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 2 01:43:59 localhost systemd[1]: Reached target Local File Systems.
Dec 2 01:43:59 localhost systemd[1]: Reached target Path Units.
Dec 2 01:43:59 localhost systemd[1]: Reached target Slice Units.
Dec 2 01:43:59 localhost systemd[1]: Reached target Swaps.
Dec 2 01:43:59 localhost systemd[1]: Reached target Timer Units.
Dec 2 01:43:59 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 2 01:43:59 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 2 01:43:59 localhost systemd[1]: Listening on Journal Socket.
Dec 2 01:43:59 localhost systemd[1]: Listening on udev Control Socket.
Dec 2 01:43:59 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 2 01:43:59 localhost systemd[1]: Reached target Socket Units.
Dec 2 01:43:59 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 2 01:43:59 localhost systemd[1]: Starting Journal Service...
Dec 2 01:43:59 localhost systemd[1]: Starting Load Kernel Modules...
Dec 2 01:43:59 localhost systemd[1]: Starting Create System Users...
Dec 2 01:43:59 localhost systemd[1]: Starting Setup Virtual Console...
Dec 2 01:43:59 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 2 01:43:59 localhost systemd-journald[284]: Journal started
Dec 2 01:43:59 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/f041467c26d044b9832e8db5f9b7a49d) is 8.0M, max 314.7M, 306.7M free.
Dec 2 01:43:59 localhost systemd-modules-load[285]: Module 'msr' is built in
Dec 2 01:43:59 localhost systemd[1]: Started Journal Service.
Dec 2 01:43:59 localhost systemd[1]: Finished Load Kernel Modules.
Dec 2 01:43:59 localhost systemd[1]: Finished Setup Virtual Console.
Dec 2 01:43:59 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 2 01:43:59 localhost systemd[1]: Starting dracut cmdline hook...
Dec 2 01:43:59 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 2 01:43:59 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Dec 2 01:43:59 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Dec 2 01:43:59 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Dec 2 01:43:59 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 2 01:43:59 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 2 01:43:59 localhost systemd[1]: Finished Create System Users.
Dec 2 01:43:59 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 2 01:43:59 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 2 01:43:59 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Dec 2 01:43:59 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 2 01:43:59 localhost dracut-cmdline[289]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 2 01:43:59 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 2 01:43:59 localhost systemd[1]: Finished dracut cmdline hook.
Dec 2 01:43:59 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 2 01:43:59 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 2 01:43:59 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 2 01:43:59 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Dec 2 01:43:59 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 2 01:43:59 localhost kernel: RPC: Registered udp transport module.
Dec 2 01:43:59 localhost kernel: RPC: Registered tcp transport module.
Dec 2 01:43:59 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 2 01:43:59 localhost rpc.statd[409]: Version 2.5.4 starting
Dec 2 01:43:59 localhost rpc.statd[409]: Initializing NSM state
Dec 2 01:43:59 localhost rpc.idmapd[414]: Setting log level to 0
Dec 2 01:43:59 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 2 01:43:59 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 2 01:43:59 localhost systemd-udevd[427]: Using default interface naming scheme 'rhel-9.0'.
Dec 2 01:43:59 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 2 01:43:59 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 2 01:43:59 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 2 01:43:59 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 2 01:43:59 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 2 01:43:59 localhost systemd[1]: Reached target System Initialization.
Dec 2 01:43:59 localhost systemd[1]: Reached target Basic System.
Dec 2 01:43:59 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 2 01:43:59 localhost systemd[1]: Reached target Network.
Dec 2 01:43:59 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 2 01:43:59 localhost systemd[1]: Starting dracut initqueue hook...
Dec 2 01:44:00 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Dec 2 01:44:00 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 2 01:44:00 localhost kernel: GPT:20971519 != 838860799
Dec 2 01:44:00 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 2 01:44:00 localhost kernel: GPT:20971519 != 838860799
Dec 2 01:44:00 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 2 01:44:00 localhost kernel: vda: vda1 vda2 vda3 vda4
Dec 2 01:44:00 localhost kernel: scsi host0: ata_piix
Dec 2 01:44:00 localhost kernel: scsi host1: ata_piix
Dec 2 01:44:00 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Dec 2 01:44:00 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Dec 2 01:44:00 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 2 01:44:00 localhost systemd-udevd[464]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 01:44:00 localhost systemd[1]: Reached target Initrd Root Device.
Dec 2 01:44:00 localhost kernel: ata1: found unknown device (class 0)
Dec 2 01:44:00 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 2 01:44:00 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 2 01:44:00 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 2 01:44:00 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 2 01:44:00 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 2 01:44:00 localhost systemd[1]: Finished dracut initqueue hook.
Dec 2 01:44:00 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 2 01:44:00 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 2 01:44:00 localhost systemd[1]: Reached target Remote File Systems.
Dec 2 01:44:00 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 2 01:44:00 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 2 01:44:00 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Dec 2 01:44:00 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Dec 2 01:44:00 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 2 01:44:00 localhost systemd[1]: Mounting /sysroot...
Dec 2 01:44:00 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 2 01:44:00 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Dec 2 01:44:00 localhost kernel: XFS (vda4): Ending clean mount
Dec 2 01:44:00 localhost systemd[1]: Mounted /sysroot.
Dec 2 01:44:00 localhost systemd[1]: Reached target Initrd Root File System.
Dec 2 01:44:00 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 2 01:44:00 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 2 01:44:00 localhost systemd[1]: Reached target Initrd File Systems.
Dec 2 01:44:00 localhost systemd[1]: Reached target Initrd Default Target.
Dec 2 01:44:00 localhost systemd[1]: Starting dracut mount hook...
Dec 2 01:44:00 localhost systemd[1]: Finished dracut mount hook.
Dec 2 01:44:00 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 2 01:44:00 localhost rpc.idmapd[414]: exiting on signal 15
Dec 2 01:44:00 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 2 01:44:00 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 2 01:44:00 localhost systemd[1]: Stopped target Network.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Timer Units.
Dec 2 01:44:00 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 2 01:44:00 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Basic System.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Path Units.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Remote File Systems.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Slice Units.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Socket Units.
Dec 2 01:44:00 localhost systemd[1]: Stopped target System Initialization.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Local File Systems.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Swaps.
Dec 2 01:44:00 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped dracut mount hook.
Dec 2 01:44:00 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 2 01:44:00 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 2 01:44:00 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 2 01:44:00 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 2 01:44:00 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 2 01:44:00 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 2 01:44:00 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 2 01:44:00 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 2 01:44:00 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 2 01:44:00 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 2 01:44:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 2 01:44:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 2 01:44:00 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Closed udev Control Socket.
Dec 2 01:44:00 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Closed udev Kernel Socket.
Dec 2 01:44:00 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 2 01:44:00 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 2 01:44:00 localhost systemd[1]: Starting Cleanup udev Database...
Dec 2 01:44:00 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 2 01:44:00 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 2 01:44:00 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Stopped Create System Users.
Dec 2 01:44:00 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 2 01:44:00 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 2 01:44:00 localhost systemd[1]: Finished Cleanup udev Database.
Dec 2 01:44:00 localhost systemd[1]: Reached target Switch Root.
Dec 2 01:44:00 localhost systemd[1]: Starting Switch Root...
Dec 2 01:44:00 localhost systemd[1]: Switching root.
Dec 2 01:44:01 localhost systemd-journald[284]: Journal stopped
Dec 2 01:44:01 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Dec 2 01:44:01 localhost kernel: audit: type=1404 audit(1764657841.061:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 2 01:44:01 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 01:44:01 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 01:44:01 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 01:44:01 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 01:44:01 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 01:44:01 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 01:44:01 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 01:44:01 localhost kernel: audit: type=1403 audit(1764657841.183:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 2 01:44:01 localhost systemd[1]: Successfully loaded SELinux policy in 125.176ms.
Dec 2 01:44:01 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.640ms.
Dec 2 01:44:01 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 2 01:44:01 localhost systemd[1]: Detected virtualization kvm.
Dec 2 01:44:01 localhost systemd[1]: Detected architecture x86-64.
Dec 2 01:44:01 localhost systemd-rc-local-generator[583]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 01:44:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 01:44:01 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 2 01:44:01 localhost systemd[1]: Stopped Switch Root.
Dec 2 01:44:01 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 2 01:44:01 localhost systemd[1]: Created slice Slice /system/getty.
Dec 2 01:44:01 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 2 01:44:01 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 2 01:44:01 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 2 01:44:01 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Dec 2 01:44:01 localhost systemd[1]: Created slice User and Session Slice.
Dec 2 01:44:01 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 2 01:44:01 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 2 01:44:01 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 2 01:44:01 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 2 01:44:01 localhost systemd[1]: Stopped target Switch Root.
Dec 2 01:44:01 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 2 01:44:01 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 2 01:44:01 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 2 01:44:01 localhost systemd[1]: Reached target Path Units.
Dec 2 01:44:01 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 2 01:44:01 localhost systemd[1]: Reached target Slice Units.
Dec 2 01:44:01 localhost systemd[1]: Reached target Swaps.
Dec 2 01:44:01 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 2 01:44:01 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 2 01:44:01 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 2 01:44:01 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 2 01:44:01 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 2 01:44:01 localhost systemd[1]: Listening on udev Control Socket.
Dec 2 01:44:01 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 2 01:44:01 localhost systemd[1]: Mounting Huge Pages File System...
Dec 2 01:44:01 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 2 01:44:01 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 2 01:44:01 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 2 01:44:01 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 2 01:44:01 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 2 01:44:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 2 01:44:01 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 2 01:44:01 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 2 01:44:01 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 2 01:44:01 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 2 01:44:01 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 2 01:44:01 localhost systemd[1]: Stopped Journal Service.
Dec 2 01:44:01 localhost kernel: fuse: init (API version 7.36)
Dec 2 01:44:01 localhost systemd[1]: Starting Journal Service...
Dec 2 01:44:01 localhost systemd[1]: Starting Load Kernel Modules...
Dec 2 01:44:01 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 2 01:44:01 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 2 01:44:01 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 2 01:44:01 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 2 01:44:01 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 2 01:44:01 localhost systemd[1]: Mounted Huge Pages File System.
Dec 2 01:44:01 localhost systemd-journald[619]: Journal started
Dec 2 01:44:01 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 8.0M, max 314.7M, 306.7M free.
Dec 2 01:44:01 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 2 01:44:01 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 2 01:44:01 localhost systemd-modules-load[620]: Module 'msr' is built in
Dec 2 01:44:01 localhost systemd[1]: Started Journal Service.
Dec 2 01:44:01 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 2 01:44:01 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 2 01:44:01 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 2 01:44:01 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 2 01:44:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 2 01:44:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 2 01:44:01 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 2 01:44:01 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 2 01:44:01 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 2 01:44:01 localhost kernel: ACPI: bus type drm_connector registered
Dec 2 01:44:01 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 2 01:44:01 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 2 01:44:01 localhost systemd[1]: Finished Load Kernel Modules.
Dec 2 01:44:01 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 2 01:44:01 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 2 01:44:01 localhost systemd[1]: Mounting FUSE Control File System...
Dec 2 01:44:01 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 2 01:44:01 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 2 01:44:01 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 2 01:44:01 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 2 01:44:01 localhost systemd[1]: Starting Load/Save Random Seed...
Dec 2 01:44:01 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 8.0M, max 314.7M, 306.7M free.
Dec 2 01:44:01 localhost systemd-journald[619]: Received client request to flush runtime journal.
Dec 2 01:44:01 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 2 01:44:01 localhost systemd[1]: Starting Create System Users...
Dec 2 01:44:01 localhost systemd[1]: Mounted FUSE Control File System.
Dec 2 01:44:01 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 2 01:44:01 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 2 01:44:01 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 2 01:44:01 localhost systemd[1]: Finished Load/Save Random Seed.
Dec 2 01:44:01 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 2 01:44:01 localhost systemd-sysusers[633]: Creating group 'sgx' with GID 989.
Dec 2 01:44:01 localhost systemd-sysusers[633]: Creating group 'systemd-oom' with GID 988.
Dec 2 01:44:01 localhost systemd-sysusers[633]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Dec 2 01:44:01 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 2 01:44:01 localhost systemd[1]: Finished Create System Users.
Dec 2 01:44:01 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 2 01:44:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 2 01:44:01 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 2 01:44:01 localhost systemd[1]: Set up automount EFI System Partition Automount.
Dec 2 01:44:02 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 2 01:44:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 2 01:44:02 localhost systemd-udevd[637]: Using default interface naming scheme 'rhel-9.0'.
Dec 2 01:44:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 2 01:44:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 2 01:44:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 2 01:44:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 2 01:44:02 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 2 01:44:02 localhost systemd-udevd[645]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 01:44:02 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Dec 2 01:44:02 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Dec 2 01:44:02 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Dec 2 01:44:02 localhost systemd[1]: Mounting /boot...
Dec 2 01:44:02 localhost systemd-fsck[678]: fsck.fat 4.2 (2021-01-31)
Dec 2 01:44:02 localhost systemd-fsck[678]: /dev/vda2: 12 files, 1782/51145 clusters
Dec 2 01:44:02 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Dec 2 01:44:02 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Dec 2 01:44:02 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 2 01:44:02 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 2 01:44:02 localhost kernel: XFS (vda3): Ending clean mount
Dec 2 01:44:02 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Dec 2 01:44:02 localhost systemd[1]: Mounted /boot.
Dec 2 01:44:02 localhost systemd[1]: Mounting /boot/efi...
Dec 2 01:44:02 localhost systemd[1]: Mounted /boot/efi.
Dec 2 01:44:02 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 2 01:44:02 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 2 01:44:02 localhost systemd[1]: Reached target Local File Systems.
Dec 2 01:44:02 localhost kernel: Console: switching to colour dummy device 80x25
Dec 2 01:44:02 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 2 01:44:02 localhost kernel: [drm] features: -context_init
Dec 2 01:44:02 localhost kernel: [drm] number of scanouts: 1
Dec 2 01:44:02 localhost kernel: [drm] number of cap sets: 0
Dec 2 01:44:02 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Dec 2 01:44:02 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Dec 2 01:44:02 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 2 01:44:02 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 2 01:44:02 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 2 01:44:02 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 2 01:44:02 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 2 01:44:02 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 2 01:44:02 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 2 01:44:02 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 2 01:44:02 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 2 01:44:02 localhost kernel: SVM: TSC scaling supported
Dec 2 01:44:02 localhost kernel: kvm: Nested Virtualization enabled
Dec 2 01:44:02 localhost kernel: SVM: kvm: Nested Paging enabled
Dec 2 01:44:02 localhost kernel: SVM: LBR virtualization supported
Dec 2 01:44:02 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 702 (bootctl)
Dec 2 01:44:02 localhost systemd[1]: Starting File System Check on /dev/vda2...
Dec 2 01:44:02 localhost systemd[1]: Finished File System Check on /dev/vda2.
Dec 2 01:44:02 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 2 01:44:02 localhost systemd[1]: Starting Security Auditing Service...
Dec 2 01:44:02 localhost systemd[1]: Starting RPC Bind...
Dec 2 01:44:02 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 2 01:44:02 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 2 01:44:02 localhost auditd[710]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Dec 2 01:44:02 localhost auditd[710]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Dec 2 01:44:02 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 2 01:44:02 localhost systemd[1]: Started RPC Bind.
Dec 2 01:44:02 localhost systemd[1]: Mounting EFI System Partition Automount...
Dec 2 01:44:02 localhost augenrules[715]: /sbin/augenrules: No change
Dec 2 01:44:02 localhost augenrules[730]: No rules
Dec 2 01:44:02 localhost augenrules[730]: enabled 1
Dec 2 01:44:02 localhost augenrules[730]: failure 1
Dec 2 01:44:02 localhost augenrules[730]: pid 710
Dec 2 01:44:02 localhost augenrules[730]: rate_limit 0
Dec 2 01:44:02 localhost augenrules[730]: backlog_limit 8192
Dec 2 01:44:02 localhost augenrules[730]: lost 0
Dec 2 01:44:02 localhost augenrules[730]: backlog 1
Dec 2 01:44:02 localhost augenrules[730]: backlog_wait_time 60000
Dec 2 01:44:02 localhost augenrules[730]: backlog_wait_time_actual 0
Dec 2 01:44:02 localhost augenrules[730]: enabled 1
Dec 2 01:44:02 localhost augenrules[730]: failure 1
Dec 2 01:44:02 localhost augenrules[730]: pid 710
Dec 2 01:44:02 localhost augenrules[730]: rate_limit 0
Dec 2 01:44:02 localhost augenrules[730]: backlog_limit 8192
Dec 2 01:44:02 localhost augenrules[730]: lost 0
Dec 2 01:44:02 localhost augenrules[730]: backlog 4
Dec 2 01:44:02 localhost augenrules[730]: backlog_wait_time 60000
Dec 2 01:44:02 localhost augenrules[730]: backlog_wait_time_actual 0
Dec 2 01:44:02 localhost augenrules[730]: enabled 1
Dec 2 01:44:02 localhost augenrules[730]: failure 1
Dec 2 01:44:02 localhost augenrules[730]: pid 710
Dec 2 01:44:02 localhost augenrules[730]: rate_limit 0
Dec 2 01:44:02 localhost augenrules[730]: backlog_limit 8192
Dec 2 01:44:02 localhost augenrules[730]: lost 0
Dec 2 01:44:02 localhost augenrules[730]: backlog 1
Dec 2 01:44:02 localhost augenrules[730]: backlog_wait_time 60000
Dec 2 01:44:02 localhost augenrules[730]: backlog_wait_time_actual 0
Dec 2 01:44:02 localhost systemd[1]: Mounted EFI System Partition Automount.
Dec 2 01:44:02 localhost systemd[1]: Started Security Auditing Service.
Dec 2 01:44:02 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 2 01:44:02 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 2 01:44:02 localhost systemd[1]: Starting Update is Completed...
Dec 2 01:44:02 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 2 01:44:02 localhost systemd[1]: Finished Update is Completed.
Dec 2 01:44:02 localhost systemd[1]: Reached target System Initialization.
Dec 2 01:44:02 localhost systemd[1]: Started dnf makecache --timer.
Dec 2 01:44:02 localhost systemd[1]: Started Daily rotation of log files.
Dec 2 01:44:02 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 2 01:44:02 localhost systemd[1]: Reached target Timer Units.
Dec 2 01:44:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 2 01:44:02 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 2 01:44:02 localhost systemd[1]: Reached target Socket Units.
Dec 2 01:44:02 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Dec 2 01:44:02 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 2 01:44:02 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 2 01:44:02 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 2 01:44:02 localhost systemd[1]: Reached target Basic System.
Dec 2 01:44:02 localhost journal[742]: Ready
Dec 2 01:44:02 localhost systemd[1]: Starting NTP client/server...
Dec 2 01:44:02 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 2 01:44:02 localhost systemd[1]: Started irqbalance daemon.
Dec 2 01:44:02 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 2 01:44:02 localhost systemd[1]: Starting System Logging Service...
Dec 2 01:44:02 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 01:44:02 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 01:44:02 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 01:44:02 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 2 01:44:02 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 2 01:44:02 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 2 01:44:02 localhost systemd[1]: Starting User Login Management...
Dec 2 01:44:02 localhost systemd[1]: Started System Logging Service.
Dec 2 01:44:02 localhost rsyslogd[754]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="754" x-info="https://www.rsyslog.com"] start
Dec 2 01:44:02 localhost rsyslogd[754]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Dec 2 01:44:02 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 2 01:44:02 localhost chronyd[763]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 2 01:44:02 localhost chronyd[763]: Using right/UTC timezone to obtain leap second data
Dec 2 01:44:02 localhost chronyd[763]: Loaded seccomp filter (level 2)
Dec 2 01:44:02 localhost systemd[1]: Started NTP client/server.
Dec 2 01:44:02 localhost systemd-logind[757]: New seat seat0.
Dec 2 01:44:02 localhost systemd-logind[757]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 2 01:44:02 localhost systemd-logind[757]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 2 01:44:02 localhost systemd[1]: Started User Login Management.
Dec 2 01:44:02 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 01:44:03 localhost cloud-init[771]: Cloud-init v. 22.1-9.el9 running 'init-local' at Tue, 02 Dec 2025 06:44:03 +0000. Up 5.35 seconds.
Dec 2 01:44:03 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpibe937xh.mount: Deactivated successfully.
Dec 2 01:44:03 localhost systemd[1]: Starting Hostname Service...
Dec 2 01:44:03 localhost systemd[1]: Started Hostname Service.
Dec 2 01:44:03 localhost systemd-hostnamed[785]: Hostname set to (static)
Dec 2 01:44:03 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Dec 2 01:44:03 localhost systemd[1]: Reached target Preparation for Network.
Dec 2 01:44:03 localhost systemd[1]: Starting Network Manager...
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5670] NetworkManager (version 1.42.2-1.el9) is starting... (boot:15f9a460-af10-408f-9e3d-85f564c0683d)
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5676] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 2 01:44:03 localhost systemd[1]: Started Network Manager.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5700] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 2 01:44:03 localhost systemd[1]: Reached target Network.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5744] manager[0x5605b46de020]: monitoring kernel firmware directory '/lib/firmware'.
Dec 2 01:44:03 localhost systemd[1]: Starting Network Manager Wait Online...
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5790] hostname: hostname: using hostnamed
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5791] hostname: static hostname changed from (none) to "np0005541913.novalocal"
Dec 2 01:44:03 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5798] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 2 01:44:03 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Dec 2 01:44:03 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5922] manager[0x5605b46de020]: rfkill: Wi-Fi hardware radio set enabled
Dec 2 01:44:03 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5928] manager[0x5605b46de020]: rfkill: WWAN hardware radio set enabled
Dec 2 01:44:03 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5993] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5994] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5997] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.5998] manager: Networking is enabled by state file
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6014] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6015] settings: Loaded settings plugin: keyfile (internal)
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6044] dhcp: init: Using DHCP client 'internal'
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6052] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6065] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6071] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6079] device (lo): Activation: starting connection 'lo' (013d3e5c-fa64-43d6-9ac0-206896105ec9)
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6090] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6095] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 2 01:44:03 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6126] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6140] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6142] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6145] device (eth0): carrier: link connected
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6150] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6156] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 2 01:44:03 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6192] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6197] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6197] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6199] manager: NetworkManager state is now CONNECTING
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6206] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6212] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:03 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6215] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 2 01:44:03 localhost systemd[1]: Reached target NFS client services.
Dec 2 01:44:03 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6283] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Dec 2 01:44:03 localhost systemd[1]: Reached target Remote File Systems.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6292] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 2 01:44:03 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 2 01:44:03 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6333] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6338] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6339] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6346] device (lo): Activation: successful, device activated.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6353] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6355] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6357] manager: NetworkManager state is now CONNECTED_SITE
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6358] device (eth0): Activation: successful, device activated.
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6363] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 2 01:44:03 localhost NetworkManager[790]: [1764657843.6365] manager: startup complete
Dec 2 01:44:03 localhost systemd[1]: Finished Network Manager Wait Online.
Dec 2 01:44:03 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Dec 2 01:44:03 localhost cloud-init[957]: Cloud-init v. 22.1-9.el9 running 'init' at Tue, 02 Dec 2025 06:44:03 +0000. Up 6.06 seconds.
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | eth0 | True | 38.102.83.144 | 255.255.255.0 | global | fa:16:3e:3f:40:cc |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | eth0 | True | fe80::f816:3eff:fe3f:40cc/64 | . | link | fa:16:3e:3f:40:cc |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | lo | True | ::1/128 | . | host | . |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Dec 2 01:44:03 localhost cloud-init[957]: ci-info: | 3 | multicast | :: | eth0 | U |
Dec 2 01:44:04 localhost cloud-init[957]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 2 01:44:04 localhost systemd[1]: Starting Authorization Manager...
Dec 2 01:44:04 localhost polkitd[1037]: Started polkitd version 0.117
Dec 2 01:44:04 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 2 01:44:04 localhost systemd[1]: Started Authorization Manager.
Dec 2 01:44:07 localhost cloud-init[957]: Generating public/private rsa key pair.
Dec 2 01:44:07 localhost cloud-init[957]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 2 01:44:07 localhost cloud-init[957]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 2 01:44:07 localhost cloud-init[957]: The key fingerprint is:
Dec 2 01:44:07 localhost cloud-init[957]: SHA256:PnFcFMPZH676c2vQCBk+PI38fOzVY44G0MSIBRfxQ4s root@np0005541913.novalocal
Dec 2 01:44:07 localhost cloud-init[957]: The key's randomart image is:
Dec 2 01:44:07 localhost cloud-init[957]: +---[RSA 3072]----+
Dec 2 01:44:07 localhost cloud-init[957]: | .+==o++ |
Dec 2 01:44:07 localhost cloud-init[957]: | ...+=+... |
Dec 2 01:44:07 localhost cloud-init[957]: | E*+* ...|
Dec 2 01:44:07 localhost cloud-init[957]: | o %.. ..|
Dec 2 01:44:07 localhost cloud-init[957]: | S + * = .|
Dec 2 01:44:07 localhost cloud-init[957]: | . o . * *o|
Dec 2 01:44:07 localhost cloud-init[957]: | o o B o|
Dec 2 01:44:07 localhost cloud-init[957]: | . . + = |
Dec 2 01:44:07 localhost cloud-init[957]: | o.+..|
Dec 2 01:44:07 localhost cloud-init[957]: +----[SHA256]-----+
Dec 2 01:44:07 localhost cloud-init[957]: Generating public/private ecdsa key pair.
Dec 2 01:44:07 localhost cloud-init[957]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 2 01:44:07 localhost cloud-init[957]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 2 01:44:07 localhost cloud-init[957]: The key fingerprint is:
Dec 2 01:44:07 localhost cloud-init[957]: SHA256:6iMR4dY6LoVZ8ymMJCbx8nHStl9AiNZS/eSMUs4v7Xg root@np0005541913.novalocal
Dec 2 01:44:07 localhost cloud-init[957]: The key's randomart image is:
Dec 2 01:44:07 localhost cloud-init[957]: +---[ECDSA 256]---+
Dec 2 01:44:07 localhost cloud-init[957]: | +.o |
Dec 2 01:44:07 localhost cloud-init[957]: |. + + + . |
Dec 2 01:44:07 localhost cloud-init[957]: | + + B * |
Dec 2 01:44:07 localhost cloud-init[957]: |+.= @ * + |
Dec 2 01:44:07 localhost cloud-init[957]: |o= % B =S |
Dec 2 01:44:07 localhost cloud-init[957]: | = O +.+ |
Dec 2 01:44:07 localhost cloud-init[957]: | o =.= |
Dec 2 01:44:07 localhost cloud-init[957]: | . o.+ E |
Dec 2 01:44:07 localhost cloud-init[957]: | . ..o |
Dec 2 01:44:07 localhost cloud-init[957]: +----[SHA256]-----+
Dec 2 01:44:07 localhost cloud-init[957]: Generating public/private ed25519 key pair.
Dec 2 01:44:07 localhost cloud-init[957]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 2 01:44:07 localhost cloud-init[957]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 2 01:44:07 localhost cloud-init[957]: The key fingerprint is:
Dec 2 01:44:07 localhost cloud-init[957]: SHA256:aGQRCWtPRCvEgwMYFOhpjkXRq2xRH4nHoTwtqSvGbEw root@np0005541913.novalocal
Dec 2 01:44:07 localhost cloud-init[957]: The key's randomart image is:
Dec 2 01:44:07 localhost cloud-init[957]: +--[ED25519 256]--+
Dec 2 01:44:07 localhost cloud-init[957]: |*=o=o=*= |
Dec 2 01:44:07 localhost cloud-init[957]: |o +o***o |
Dec 2 01:44:07 localhost cloud-init[957]: |...oX=*. |
Dec 2 01:44:07 localhost cloud-init[957]: | +oo.O.. |
Dec 2 01:44:07 localhost cloud-init[957]: |+E.o + S |
Dec 2 01:44:07 localhost cloud-init[957]: |*.+. . |
Dec 2 01:44:07 localhost cloud-init[957]: |.B. |
Dec 2 01:44:07 localhost cloud-init[957]: |o. |
Dec 2 01:44:07 localhost cloud-init[957]: | |
Dec 2 01:44:07 localhost cloud-init[957]: +----[SHA256]-----+
Dec 2 01:44:07 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Dec 2 01:44:07 localhost systemd[1]: Reached target Cloud-config availability.
Dec 2 01:44:07 localhost systemd[1]: Reached target Network is Online.
Dec 2 01:44:07 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Dec 2 01:44:07 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Dec 2 01:44:07 localhost systemd[1]: Starting Crash recovery kernel arming...
Dec 2 01:44:07 localhost systemd[1]: Starting Notify NFS peers of a restart...
Dec 2 01:44:07 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 2 01:44:07 localhost systemd[1]: Starting Permit User Sessions...
Dec 2 01:44:07 localhost sm-notify[1129]: Version 2.5.4 starting
Dec 2 01:44:07 localhost systemd[1]: Started Notify NFS peers of a restart.
Dec 2 01:44:07 localhost sshd[1130]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:07 localhost systemd[1]: Finished Permit User Sessions.
Dec 2 01:44:07 localhost systemd[1]: Started Command Scheduler.
Dec 2 01:44:07 localhost systemd[1]: Started Getty on tty1.
Dec 2 01:44:07 localhost systemd[1]: Started Serial Getty on ttyS0.
Dec 2 01:44:07 localhost systemd[1]: Reached target Login Prompts.
Dec 2 01:44:07 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 01:44:07 localhost systemd[1]: Reached target Multi-User System.
Dec 2 01:44:07 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 2 01:44:07 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 2 01:44:07 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 2 01:44:08 localhost kdumpctl[1133]: kdump: No kdump initial ramdisk found.
Dec 2 01:44:08 localhost kdumpctl[1133]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Dec 2 01:44:08 localhost cloud-init[1255]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Tue, 02 Dec 2025 06:44:08 +0000. Up 10.33 seconds.
Dec 2 01:44:08 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Dec 2 01:44:08 localhost systemd[1]: Starting Execute cloud user/final scripts...
Dec 2 01:44:08 localhost sshd[1413]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost dracut[1417]: dracut-057-21.git20230214.el9
Dec 2 01:44:08 localhost sshd[1419]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost sshd[1435]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost chronyd[763]: Selected source 162.159.200.123 (2.rhel.pool.ntp.org)
Dec 2 01:44:08 localhost sshd[1437]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost chronyd[763]: System clock TAI offset set to 37 seconds
Dec 2 01:44:08 localhost sshd[1439]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost sshd[1442]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost dracut[1420]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Dec 2 01:44:08 localhost sshd[1457]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost cloud-init[1459]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Tue, 02 Dec 2025 06:44:08 +0000. Up 10.71 seconds.
Dec 2 01:44:08 localhost sshd[1471]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost sshd[1521]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:08 localhost cloud-init[1544]: #############################################################
Dec 2 01:44:08 localhost cloud-init[1546]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 2 01:44:08 localhost cloud-init[1551]: 256 SHA256:6iMR4dY6LoVZ8ymMJCbx8nHStl9AiNZS/eSMUs4v7Xg root@np0005541913.novalocal (ECDSA)
Dec 2 01:44:08 localhost cloud-init[1558]: 256 SHA256:aGQRCWtPRCvEgwMYFOhpjkXRq2xRH4nHoTwtqSvGbEw root@np0005541913.novalocal (ED25519)
Dec 2 01:44:08 localhost cloud-init[1564]: 3072 SHA256:PnFcFMPZH676c2vQCBk+PI38fOzVY44G0MSIBRfxQ4s root@np0005541913.novalocal (RSA)
Dec 2 01:44:08 localhost cloud-init[1566]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 2 01:44:08 localhost cloud-init[1568]: #############################################################
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 2 01:44:08 localhost cloud-init[1459]: Cloud-init v. 22.1-9.el9 finished at Tue, 02 Dec 2025 06:44:08 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 10.98 seconds
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:08 localhost systemd[1]: Reloading Network Manager...
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 2 01:44:08 localhost NetworkManager[790]: [1764657848.8882] audit: op="reload" arg="0" pid=1666 uid=0 result="success"
Dec 2 01:44:08 localhost NetworkManager[790]: [1764657848.8892] config: signal: SIGHUP (no changes from disk)
Dec 2 01:44:08 localhost systemd[1]: Reloaded Network Manager.
Dec 2 01:44:08 localhost systemd[1]: Finished Execute cloud user/final scripts.
Dec 2 01:44:08 localhost systemd[1]: Reached target Cloud-init target.
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 2 01:44:08 localhost dracut[1420]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: memstrack is not available
Dec 2 01:44:09 localhost dracut[1420]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 2 01:44:09 localhost dracut[1420]: memstrack is not available
Dec 2 01:44:09 localhost dracut[1420]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 2 01:44:09 localhost dracut[1420]: *** Including module: systemd ***
Dec 2 01:44:09 localhost dracut[1420]: *** Including module: systemd-initrd ***
Dec 2 01:44:09 localhost dracut[1420]: *** Including module: i18n ***
Dec 2 01:44:09 localhost dracut[1420]: No KEYMAP configured.
Dec 2 01:44:09 localhost dracut[1420]: *** Including module: drm ***
Dec 2 01:44:10 localhost dracut[1420]: *** Including module: prefixdevname ***
Dec 2 01:44:10 localhost dracut[1420]: *** Including module: kernel-modules ***
Dec 2 01:44:10 localhost dracut[1420]: *** Including module: kernel-modules-extra ***
Dec 2 01:44:10 localhost dracut[1420]: *** Including module: qemu ***
Dec 2 01:44:11 localhost dracut[1420]: *** Including module: fstab-sys ***
Dec 2 01:44:11 localhost dracut[1420]: *** Including module: rootfs-block ***
Dec 2 01:44:11 localhost dracut[1420]: *** Including module: terminfo ***
Dec 2 01:44:11 localhost dracut[1420]: *** Including module: udev-rules ***
Dec 2 01:44:11 localhost dracut[1420]: Skipping udev rule: 91-permissions.rules
Dec 2 01:44:11 localhost dracut[1420]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 2 01:44:11 localhost dracut[1420]: *** Including module: virtiofs ***
Dec 2 01:44:11 localhost dracut[1420]: *** Including module: dracut-systemd ***
Dec 2 01:44:11 localhost dracut[1420]: *** Including module: usrmount ***
Dec 2 01:44:11 localhost dracut[1420]: *** Including module: base ***
Dec 2 01:44:12 localhost dracut[1420]: *** Including module: fs-lib ***
Dec 2 01:44:12 localhost dracut[1420]: *** Including module: kdumpbase ***
Dec 2 01:44:12 localhost dracut[1420]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl module: mangling fw_dir
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 2 01:44:12 localhost dracut[1420]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Dec 2 01:44:12 localhost dracut[1420]: *** Including module: shutdown ***
Dec 2 01:44:12 localhost dracut[1420]: *** Including module: squash ***
Dec 2 01:44:12 localhost dracut[1420]: *** Including modules done ***
Dec 2 01:44:12 localhost dracut[1420]: *** Installing kernel module dependencies ***
Dec 2 01:44:13 localhost dracut[1420]: *** Installing kernel module dependencies done ***
Dec 2 01:44:13 localhost dracut[1420]: *** Resolving executable dependencies ***
Dec 2 01:44:13 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 2 01:44:15 localhost dracut[1420]: *** Resolving executable dependencies done ***
Dec 2 01:44:15 localhost dracut[1420]: *** Hardlinking files ***
Dec 2 01:44:15 localhost dracut[1420]: Mode: real
Dec 2 01:44:15 localhost dracut[1420]: Files: 1099
Dec 2 01:44:15 localhost dracut[1420]: Linked: 3 files
Dec 2 01:44:15 localhost dracut[1420]: Compared: 0 xattrs
Dec 2 01:44:15 localhost dracut[1420]: Compared: 373 files
Dec 2 01:44:15 localhost dracut[1420]: Saved: 61.04 KiB
Dec 2 01:44:15 localhost dracut[1420]: Duration: 0.024060 seconds
Dec 2 01:44:15 localhost dracut[1420]: *** Hardlinking files done ***
Dec 2 01:44:15 localhost dracut[1420]: Could not find 'strip'. Not stripping the initramfs.
Dec 2 01:44:15 localhost dracut[1420]: *** Generating early-microcode cpio image ***
Dec 2 01:44:15 localhost dracut[1420]: *** Constructing AuthenticAMD.bin ***
Dec 2 01:44:15 localhost dracut[1420]: *** Store current command line parameters ***
Dec 2 01:44:15 localhost dracut[1420]: Stored kernel commandline:
Dec 2 01:44:15 localhost dracut[1420]: No dracut internal kernel commandline stored in the initramfs
Dec 2 01:44:15 localhost dracut[1420]: *** Install squash loader ***
Dec 2 01:44:15 localhost dracut[1420]: *** Squashing the files inside the initramfs ***
Dec 2 01:44:16 localhost dracut[1420]: *** Squashing the files inside the initramfs done ***
Dec 2 01:44:16 localhost dracut[1420]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Dec 2 01:44:17 localhost dracut[1420]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Dec 2 01:44:17 localhost kdumpctl[1133]: kdump: kexec: loaded kdump kernel
Dec 2 01:44:17 localhost kdumpctl[1133]: kdump: Starting kdump: [OK]
Dec 2 01:44:17 localhost systemd[1]: Finished Crash recovery kernel arming.
Dec 2 01:44:17 localhost systemd[1]: Startup finished in 1.303s (kernel) + 1.974s (initrd) + 16.650s (userspace) = 19.928s.
Dec 2 01:44:33 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 2 01:44:34 localhost sshd[4173]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 01:44:34 localhost systemd[1]: Created slice User Slice of UID 1000.
Dec 2 01:44:34 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 2 01:44:34 localhost systemd-logind[757]: New session 1 of user zuul.
Dec 2 01:44:34 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 2 01:44:34 localhost systemd[1]: Starting User Manager for UID 1000...
Dec 2 01:44:35 localhost systemd[4177]: Queued start job for default target Main User Target.
Dec 2 01:44:35 localhost systemd[4177]: Created slice User Application Slice.
Dec 2 01:44:35 localhost systemd[4177]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 01:44:35 localhost systemd[4177]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Paths.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Timers.
Dec 2 01:44:35 localhost systemd[4177]: Starting D-Bus User Message Bus Socket...
Dec 2 01:44:35 localhost systemd[4177]: Starting Create User's Volatile Files and Directories...
Dec 2 01:44:35 localhost systemd[4177]: Finished Create User's Volatile Files and Directories.
Dec 2 01:44:35 localhost systemd[4177]: Listening on D-Bus User Message Bus Socket.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Sockets.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Basic System.
Dec 2 01:44:35 localhost systemd[4177]: Reached target Main User Target.
Dec 2 01:44:35 localhost systemd[4177]: Startup finished in 158ms.
Dec 2 01:44:35 localhost systemd[1]: Started User Manager for UID 1000.
Dec 2 01:44:35 localhost systemd[1]: Started Session 1 of User zuul.
Dec 2 01:44:35 localhost python3[4229]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 01:44:44 localhost python3[4248]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 01:44:51 localhost python3[4301]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 01:44:52 localhost python3[4331]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 2 01:44:55 localhost python3[4347]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:44:55 localhost python3[4361]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:44:57 localhost python3[4420]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 01:44:57 localhost python3[4461]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764657896.8804188-390-106100223893400/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa follow=False checksum=c9b7a1839a060a12dd883255955d0b791bf96d1d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:44:58 localhost python3[4534]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 01:44:59 localhost python3[4575]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764657898.641792-492-76108552797260/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa.pub follow=False checksum=076b8979e1bf6ba70130c32daa0e2e874f6f0bae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:01 localhost python3[4603]: ansible-ping Invoked with data=pong
Dec 2 01:45:03 localhost python3[4618]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 01:45:06 localhost python3[4671]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 2 01:45:09 localhost python3[4693]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:09 localhost python3[4707]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:09 localhost python3[4721]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:10 localhost python3[4735]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:10 localhost python3[4749]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:11 localhost python3[4763]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:13 localhost python3[4779]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:15 localhost python3[4827]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 01:45:15 localhost python3[4870]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764657914.9314513-101-181543977722468/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 01:45:22 localhost python3[4898]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:23 localhost python3[4912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:23 localhost python3[4927]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:23 localhost python3[4941]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:24 localhost python3[4955]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:24 localhost python3[4969]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:24 localhost python3[4983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:24 localhost python3[4997]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:25 localhost python3[5011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:25 localhost python3[5025]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:25 localhost python3[5039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:25 localhost python3[5053]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:26 localhost python3[5067]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:26 localhost python3[5081]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:26 localhost python3[5095]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:26 localhost python3[5109]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:27 localhost python3[5123]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:27 localhost python3[5137]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:27 localhost python3[5151]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:27 localhost python3[5165]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:28 localhost python3[5179]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:28 localhost python3[5193]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:28 localhost python3[5207]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:28 localhost python3[5221]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:29 localhost python3[5235]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:29 localhost python3[5249]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 01:45:31 localhost python3[5265]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 2 01:45:31 localhost systemd[1]: Starting Time & Date Service...
Dec 2 01:45:31 localhost systemd[1]: Started Time & Date Service.
Dec 2 01:45:31 localhost systemd-timedated[5267]: Changed time zone to 'UTC' (UTC).
Dec 2 01:45:32 localhost python3[5286]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:33 localhost python3[5332]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:45:34 localhost python3[5373]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764657933.5532408-496-126059622645813/source _original_basename=tmpzjyyi0by follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:35 localhost python3[5433]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:45:35 localhost python3[5474]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764657935.0452213-585-65096931480121/source _original_basename=tmpikcfcn7u follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:37 localhost python3[5536]: ansible-ansible.legacy.stat Invoked with 
path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:45:37 localhost python3[5579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764657937.3060117-728-129315833383179/source _original_basename=tmpwxtt91e9 follow=False checksum=d3787dbc1d919dd7098cc7939d07e9b9a9d1522d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:39 localhost python3[5607]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:45:39 localhost python3[5623]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:45:40 localhost python3[5673]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:45:40 localhost python3[5716]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764657940.255052-854-280053912187743/source _original_basename=tmpj8ipvftm follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:45:42 localhost python3[5747]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-2304-36f4-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:45:43 localhost python3[5765]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2304-36f4-000000000024-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Dec 2 01:45:45 localhost python3[5783]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:46:01 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Dec 2 01:46:04 localhost python3[5802]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:47:04 localhost systemd-logind[757]: Session 1 logged out. Waiting for processes to exit. Dec 2 01:47:16 localhost systemd[4177]: Starting Mark boot as successful... 
Dec 2 01:47:16 localhost systemd[4177]: Finished Mark boot as successful. Dec 2 01:47:24 localhost chronyd[763]: Selected source 174.138.193.90 (2.rhel.pool.ntp.org) Dec 2 01:48:04 localhost systemd[1]: Unmounting EFI System Partition Automount... Dec 2 01:48:04 localhost systemd[1]: efi.mount: Deactivated successfully. Dec 2 01:48:04 localhost systemd[1]: Unmounted EFI System Partition Automount. Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Dec 2 01:49:42 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Dec 2 01:49:42 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9238] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Dec 2 01:49:42 localhost systemd-udevd[5810]: Network interface NamePolicy= disabled on kernel command line. Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9365] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Dec 2 01:49:42 localhost systemd[4177]: Created slice User Background Tasks Slice. 
Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9383] settings: (eth1): created default wired connection 'Wired connection 1' Dec 2 01:49:42 localhost systemd[4177]: Starting Cleanup of User's Temporary Files and Directories... Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9388] device (eth1): carrier: link connected Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9390] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9394] policy: auto-activating connection 'Wired connection 1' (35782912-d644-3ee2-930b-ef582ceabd4b) Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9398] device (eth1): Activation: starting connection 'Wired connection 1' (35782912-d644-3ee2-930b-ef582ceabd4b) Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9399] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9402] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9405] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Dec 2 01:49:42 localhost NetworkManager[790]: [1764658182.9408] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 2 01:49:42 localhost systemd[4177]: Finished Cleanup of User's Temporary Files and Directories. Dec 2 01:49:43 localhost sshd[5814]: main: sshd: ssh-rsa algorithm is disabled Dec 2 01:49:43 localhost systemd-logind[757]: New session 3 of user zuul. Dec 2 01:49:43 localhost systemd[1]: Started Session 3 of User zuul. 
Dec 2 01:49:43 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Dec 2 01:49:44 localhost python3[5831]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-8e68-9bb8-000000000408-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:49:57 localhost python3[5881]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:49:57 localhost python3[5924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764658197.0543106-486-144637397494422/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d17281d2876b8cf83357ec6c9a421c589994e444 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:49:58 localhost python3[5954]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 01:49:58 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Dec 2 01:49:58 localhost systemd[1]: Stopped Network Manager Wait Online. Dec 2 01:49:58 localhost systemd[1]: Stopping Network Manager Wait Online... Dec 2 01:49:58 localhost systemd[1]: Stopping Network Manager... Dec 2 01:49:58 localhost NetworkManager[790]: [1764658198.3387] caught SIGTERM, shutting down normally. 
Dec 2 01:49:58 localhost NetworkManager[790]: [1764658198.3487] dhcp4 (eth0): canceled DHCP transaction Dec 2 01:49:58 localhost NetworkManager[790]: [1764658198.3488] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Dec 2 01:49:58 localhost NetworkManager[790]: [1764658198.3488] dhcp4 (eth0): state changed no lease Dec 2 01:49:58 localhost NetworkManager[790]: [1764658198.3490] manager: NetworkManager state is now CONNECTING Dec 2 01:49:58 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 2 01:49:58 localhost NetworkManager[790]: [1764658198.3567] dhcp4 (eth1): canceled DHCP transaction Dec 2 01:49:58 localhost NetworkManager[790]: [1764658198.3567] dhcp4 (eth1): state changed no lease Dec 2 01:49:58 localhost NetworkManager[790]: [1764658198.3618] exiting (success) Dec 2 01:49:58 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 2 01:49:58 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Dec 2 01:49:58 localhost systemd[1]: Stopped Network Manager. Dec 2 01:49:58 localhost systemd[1]: NetworkManager.service: Consumed 2.285s CPU time. Dec 2 01:49:58 localhost systemd[1]: Starting Network Manager... Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.4111] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:15f9a460-af10-408f-9e3d-85f564c0683d) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.4114] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.4141] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Dec 2 01:49:58 localhost systemd[1]: Started Network Manager. Dec 2 01:49:58 localhost systemd[1]: Starting Network Manager Wait Online... Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.4196] manager[0x55da7c1f2090]: monitoring kernel firmware directory '/lib/firmware'. 
Dec 2 01:49:58 localhost systemd[1]: Starting Hostname Service... Dec 2 01:49:58 localhost systemd[1]: Started Hostname Service. Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5004] hostname: hostname: using hostnamed Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5005] hostname: static hostname changed from (none) to "np0005541913.novalocal" Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5013] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5020] manager[0x55da7c1f2090]: rfkill: Wi-Fi hardware radio set enabled Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5020] manager[0x55da7c1f2090]: rfkill: WWAN hardware radio set enabled Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5063] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5064] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5065] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5066] manager: Networking is enabled by state file Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5074] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5075] settings: Loaded settings plugin: keyfile (internal) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5126] dhcp: init: Using DHCP client 'internal' Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5130] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5140] device (lo): state change: unmanaged -> unavailable (reason 
'connection-assumed', sys-iface-state: 'external') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5151] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5167] device (lo): Activation: starting connection 'lo' (013d3e5c-fa64-43d6-9ac0-206896105ec9) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5179] device (eth0): carrier: link connected Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5185] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5193] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5194] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5209] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5222] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5233] device (eth1): carrier: link connected Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5240] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5250] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (35782912-d644-3ee2-930b-ef582ceabd4b) (indicated) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5251] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Dec 2 01:49:58 
localhost NetworkManager[5965]: [1764658198.5262] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5276] device (eth1): Activation: starting connection 'Wired connection 1' (35782912-d644-3ee2-930b-ef582ceabd4b) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5305] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5311] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5313] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5317] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5322] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5327] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5331] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5335] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5345] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5352] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5368] device (eth1): state change: config -> 
ip-config (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5372] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5428] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5435] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5445] device (lo): Activation: successful, device activated. Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5459] dhcp4 (eth0): state changed new lease, address=38.102.83.144 Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5461] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5549] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5582] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5583] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5586] manager: NetworkManager state is now CONNECTED_SITE Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5588] device (eth0): Activation: successful, device activated. 
Dec 2 01:49:58 localhost NetworkManager[5965]: [1764658198.5591] manager: NetworkManager state is now CONNECTED_GLOBAL Dec 2 01:49:58 localhost python3[6013]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-8e68-9bb8-00000000012b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:50:08 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 2 01:50:28 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 2 01:50:43 localhost NetworkManager[5965]: [1764658243.7817] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:43 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 2 01:50:43 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 2 01:50:43 localhost NetworkManager[5965]: [1764658243.8007] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:43 localhost NetworkManager[5965]: [1764658243.8012] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Dec 2 01:50:43 localhost NetworkManager[5965]: [1764658243.8024] device (eth1): Activation: successful, device activated. Dec 2 01:50:43 localhost NetworkManager[5965]: [1764658243.8033] manager: startup complete Dec 2 01:50:43 localhost systemd[1]: Finished Network Manager Wait Online. Dec 2 01:50:53 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 2 01:50:58 localhost systemd[1]: session-3.scope: Deactivated successfully. Dec 2 01:50:58 localhost systemd[1]: session-3.scope: Consumed 1.356s CPU time. Dec 2 01:50:58 localhost systemd-logind[757]: Session 3 logged out. Waiting for processes to exit. 
Dec 2 01:50:58 localhost systemd-logind[757]: Removed session 3. Dec 2 01:51:41 localhost sshd[6055]: main: sshd: ssh-rsa algorithm is disabled Dec 2 01:51:41 localhost systemd-logind[757]: New session 4 of user zuul. Dec 2 01:51:41 localhost systemd[1]: Started Session 4 of User zuul. Dec 2 01:51:41 localhost python3[6106]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:51:42 localhost python3[6149]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764658301.6792257-628-168368169528833/source _original_basename=tmpwb1pv6io follow=False checksum=c2b23ffe44719bb1642f7b68b2bf34d320a2a721 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:51:45 localhost systemd[1]: session-4.scope: Deactivated successfully. Dec 2 01:51:45 localhost systemd-logind[757]: Session 4 logged out. Waiting for processes to exit. Dec 2 01:51:45 localhost systemd-logind[757]: Removed session 4. Dec 2 01:57:38 localhost sshd[6168]: main: sshd: ssh-rsa algorithm is disabled Dec 2 01:57:39 localhost systemd-logind[757]: New session 5 of user zuul. Dec 2 01:57:39 localhost systemd[1]: Started Session 5 of User zuul. 
Dec 2 01:57:39 localhost python3[6187]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-e6e8-5ca8-000000001d02-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:57:40 localhost python3[6205]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:40 localhost python3[6221]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:41 localhost python3[6237]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:41 localhost python3[6253]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:42 localhost python3[6270]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:43 localhost python3[6318]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 01:57:44 localhost python3[6361]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764658663.2627435-645-88884789018823/source _original_basename=tmpenr4wqc1 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 01:57:45 localhost python3[6391]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 01:57:45 localhost systemd[1]: Reloading. Dec 2 01:57:45 localhost systemd-rc-local-generator[6407]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 01:57:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 01:57:47 localhost python3[6437]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None Dec 2 01:57:48 localhost python3[6453]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:57:48 localhost python3[6471]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:57:49 localhost python3[6489]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:57:49 localhost python3[6507]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None 
stdin=None Dec 2 01:58:00 localhost python3[6525]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-e6e8-5ca8-000000001d09-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 01:58:01 localhost python3[6544]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 01:58:04 localhost systemd-logind[757]: Session 5 logged out. Waiting for processes to exit. Dec 2 01:58:04 localhost systemd[1]: session-5.scope: Deactivated successfully. Dec 2 01:58:04 localhost systemd[1]: session-5.scope: Consumed 4.079s CPU time. Dec 2 01:58:04 localhost systemd-logind[757]: Removed session 5. Dec 2 01:59:16 localhost systemd[1]: Starting Cleanup of Temporary Directories... Dec 2 01:59:16 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Dec 2 01:59:16 localhost systemd[1]: Finished Cleanup of Temporary Directories. Dec 2 01:59:16 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Dec 2 01:59:18 localhost sshd[6555]: main: sshd: ssh-rsa algorithm is disabled Dec 2 01:59:18 localhost systemd-logind[757]: New session 6 of user zuul. Dec 2 01:59:18 localhost systemd[1]: Started Session 6 of User zuul. Dec 2 01:59:19 localhost systemd[1]: Starting RHSM dbus service... Dec 2 01:59:19 localhost systemd[1]: Started RHSM dbus service. 
Dec 2 01:59:19 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:19 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:19 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:19 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:20 localhost rhsm-service[6579]: INFO [subscription_manager.managerlib:90] Consumer created: np0005541913.novalocal (d1b4d74d-2a0e-41d6-a299-a10b4d7396a9)
Dec 2 01:59:20 localhost subscription-manager[6579]: Registered system with identity: d1b4d74d-2a0e-41d6-a299-a10b4d7396a9
Dec 2 01:59:21 localhost rhsm-service[6579]: INFO [subscription_manager.entcertlib:131] certs updated:
Dec 2 01:59:21 localhost rhsm-service[6579]: Total updates: 1
Dec 2 01:59:21 localhost rhsm-service[6579]: Found (local) serial# []
Dec 2 01:59:21 localhost rhsm-service[6579]: Expected (UEP) serial# [5614244909064200304]
Dec 2 01:59:21 localhost rhsm-service[6579]: Added (new)
Dec 2 01:59:21 localhost rhsm-service[6579]: [sn:5614244909064200304 ( Content Access,) @ /etc/pki/entitlement/5614244909064200304.pem]
Dec 2 01:59:21 localhost rhsm-service[6579]: Deleted (rogue):
Dec 2 01:59:21 localhost rhsm-service[6579]:
Dec 2 01:59:21 localhost subscription-manager[6579]: Added subscription for 'Content Access' contract 'None'
Dec 2 01:59:21 localhost subscription-manager[6579]: Added subscription for product ' Content Access'
Dec 2 01:59:22 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:22 localhost rhsm-service[6579]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 2 01:59:22 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:22 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:22 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:22 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:22 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 01:59:30 localhost python3[6670]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-0809-2eed-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 01:59:32 localhost python3[6689]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:00:03 localhost setsebool[6764]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 2 02:00:03 localhost setsebool[6764]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 2 02:00:12 localhost kernel: SELinux: Converting 407 SID table entries...
Dec 2 02:00:12 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:00:12 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:00:12 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:00:12 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:00:12 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:00:12 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:00:12 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:00:24 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=3 res=1
Dec 2 02:00:24 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:00:24 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:00:24 localhost systemd[1]: Reloading.
Dec 2 02:00:24 localhost systemd-rc-local-generator[7539]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:00:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:00:24 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:00:28 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:00:33 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:00:33 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:00:33 localhost systemd[1]: man-db-cache-update.service: Consumed 10.463s CPU time.
Dec 2 02:00:33 localhost systemd[1]: run-r0543e0d27c2d4ab895c16ea57db181eb.service: Deactivated successfully.
Dec 2 02:01:26 localhost systemd[1]: session-6.scope: Deactivated successfully.
Dec 2 02:01:26 localhost systemd[1]: session-6.scope: Consumed 50.351s CPU time.
Dec 2 02:01:26 localhost systemd-logind[757]: Session 6 logged out. Waiting for processes to exit.
Dec 2 02:01:26 localhost systemd-logind[757]: Removed session 6.
Dec 2 02:01:31 localhost sshd[18354]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:31 localhost systemd-logind[757]: New session 7 of user zuul.
Dec 2 02:01:31 localhost systemd[1]: Started Session 7 of User zuul.
Dec 2 02:01:31 localhost podman[18374]: 2025-12-02 07:01:31.918607628 +0000 UTC m=+0.105880147 system refresh
Dec 2 02:01:32 localhost systemd[4177]: Starting D-Bus User Message Bus...
Dec 2 02:01:32 localhost systemd[4177]: Started D-Bus User Message Bus.
Dec 2 02:01:32 localhost dbus-broker-launch[18431]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 2 02:01:32 localhost dbus-broker-launch[18431]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 2 02:01:32 localhost journal[18431]: Ready
Dec 2 02:01:32 localhost systemd[4177]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Dec 2 02:01:32 localhost systemd[4177]: Created slice Slice /user.
Dec 2 02:01:32 localhost systemd[4177]: podman-18414.scope: unit configures an IP firewall, but not running as root.
Dec 2 02:01:32 localhost systemd[4177]: (This warning is only shown for the first unit using IP firewalling.)
Dec 2 02:01:32 localhost systemd[4177]: Started podman-18414.scope.
Dec 2 02:01:32 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:01:33 localhost systemd[4177]: Started podman-pause-5c4ae909.scope.
Dec 2 02:01:35 localhost systemd[1]: session-7.scope: Deactivated successfully.
Dec 2 02:01:35 localhost systemd[1]: session-7.scope: Consumed 1.095s CPU time.
Dec 2 02:01:35 localhost systemd-logind[757]: Session 7 logged out. Waiting for processes to exit.
Dec 2 02:01:35 localhost systemd-logind[757]: Removed session 7.
Dec 2 02:01:51 localhost sshd[18438]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:51 localhost sshd[18434]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:51 localhost sshd[18437]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:51 localhost sshd[18436]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:51 localhost sshd[18435]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:56 localhost sshd[18444]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:01:56 localhost systemd-logind[757]: New session 8 of user zuul.
Dec 2 02:01:56 localhost systemd[1]: Started Session 8 of User zuul.
Dec 2 02:01:57 localhost python3[18461]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI3vTocdvpL7KoTE0s+B2HOorkXEJmfFflLp6CHTopK26IhGD4IX+p0PXIjQjXzwbw8u6vDuDtUAlLIH4wGuE2A= zuul@np0005541906.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:01:57 localhost python3[18477]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI3vTocdvpL7KoTE0s+B2HOorkXEJmfFflLp6CHTopK26IhGD4IX+p0PXIjQjXzwbw8u6vDuDtUAlLIH4wGuE2A= zuul@np0005541906.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:01:59 localhost systemd[1]: session-8.scope: Deactivated successfully.
Dec 2 02:01:59 localhost systemd-logind[757]: Session 8 logged out. Waiting for processes to exit.
Dec 2 02:01:59 localhost systemd-logind[757]: Removed session 8.
Dec 2 02:03:27 localhost sshd[18479]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:03:27 localhost systemd-logind[757]: New session 9 of user zuul.
Dec 2 02:03:27 localhost systemd[1]: Started Session 9 of User zuul.
Dec 2 02:03:28 localhost python3[18498]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 2 02:03:29 localhost python3[18514]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541913.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 2 02:03:30 localhost python3[18564]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:31 localhost python3[18607]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764659010.6460295-137-171557122913719/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa follow=False checksum=c9b7a1839a060a12dd883255955d0b791bf96d1d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:32 localhost python3[18669]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:32 localhost python3[18712]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764659012.3983626-225-243319304045149/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa.pub follow=False checksum=076b8979e1bf6ba70130c32daa0e2e874f6f0bae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:35 localhost python3[18742]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:36 localhost python3[18788]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:36 localhost python3[18804]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpbwl00bfs recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:37 localhost python3[18864]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:37 localhost python3[18880]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmp7eh3joiw recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:39 localhost python3[18940]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:03:39 localhost python3[18956]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmplienmmx0 recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:03:40 localhost systemd[1]: session-9.scope: Deactivated successfully.
Dec 2 02:03:40 localhost systemd[1]: session-9.scope: Consumed 3.365s CPU time.
Dec 2 02:03:40 localhost systemd-logind[757]: Session 9 logged out. Waiting for processes to exit.
Dec 2 02:03:40 localhost systemd-logind[757]: Removed session 9.
Dec 2 02:05:53 localhost sshd[18971]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:05:53 localhost systemd-logind[757]: New session 10 of user zuul.
Dec 2 02:05:53 localhost systemd[1]: Started Session 10 of User zuul.
Dec 2 02:05:53 localhost python3[19017]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:09:16 localhost sshd[19022]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:09:16 localhost sshd[19023]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:10:52 localhost systemd[1]: session-10.scope: Deactivated successfully.
Dec 2 02:10:52 localhost systemd-logind[757]: Session 10 logged out. Waiting for processes to exit.
Dec 2 02:10:52 localhost systemd-logind[757]: Removed session 10.
Dec 2 02:13:06 localhost systemd[1]: Starting dnf makecache...
Dec 2 02:13:06 localhost dnf[19027]: Updating Subscription Management repositories.
Dec 2 02:13:08 localhost dnf[19027]: Failed determining last makecache time.
Dec 2 02:13:08 localhost dnf[19027]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 36 kB/s | 4.5 kB 00:00
Dec 2 02:13:08 localhost dnf[19027]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 34 kB/s | 4.5 kB 00:00
Dec 2 02:13:08 localhost dnf[19027]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 30 kB/s | 4.1 kB 00:00
Dec 2 02:13:09 localhost dnf[19027]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 31 kB/s | 4.1 kB 00:00
Dec 2 02:13:09 localhost dnf[19027]: Metadata cache created.
Dec 2 02:13:09 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 2 02:13:09 localhost systemd[1]: Finished dnf makecache.
Dec 2 02:13:09 localhost systemd[1]: dnf-makecache.service: Consumed 2.628s CPU time.
Dec 2 02:15:59 localhost sshd[19033]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:18:15 localhost sshd[19038]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:18:15 localhost systemd-logind[757]: New session 11 of user zuul.
Dec 2 02:18:15 localhost systemd[1]: Started Session 11 of User zuul.
Dec 2 02:18:16 localhost python3[19055]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:18:17 localhost python3[19075]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:18:22 localhost python3[19095]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Dec 2 02:18:25 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:18:25 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:21 localhost python3[19252]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Dec 2 02:19:23 localhost sshd[19255]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:19:24 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:33 localhost python3[19394]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Dec 2 02:19:37 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:37 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:19:43 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:05 localhost python3[19669]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 2 02:20:08 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:08 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:14 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:14 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:40 localhost python3[20005]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 2 02:20:44 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:44 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:49 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:20:50 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:21:15 localhost python3[20402]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000013-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:21:20 localhost python3[20421]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:21:41 localhost kernel: SELinux: Converting 490 SID table entries...
Dec 2 02:21:41 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:21:41 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:21:41 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:21:41 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:21:41 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:21:41 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:21:41 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:21:42 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=4 res=1
Dec 2 02:21:42 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 2 02:21:45 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:21:45 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:21:45 localhost systemd[1]: Reloading.
Dec 2 02:21:45 localhost systemd-sysv-generator[21076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:21:45 localhost systemd-rc-local-generator[21072]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:21:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:21:45 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:21:46 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:21:46 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:21:46 localhost systemd[1]: run-ra43097f77f2f4ce88eea19a47a4833bf.service: Deactivated successfully.
Dec 2 02:21:46 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:21:46 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 02:22:13 localhost python3[21631]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000015-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:22:45 localhost python3[21651]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:22:46 localhost python3[21699]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:22:46 localhost python3[21742]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764660165.6364863-293-247612159958287/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=9333f42ac4b9baf349a5c32f7bcba3335b5912e0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:22:48 localhost python3[21772]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:48 localhost systemd-journald[619]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Dec 2 02:22:48 localhost systemd-journald[619]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 2 02:22:48 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:22:48 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:22:48 localhost python3[21793]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:48 localhost python3[21813]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:48 localhost python3[21833]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:49 localhost python3[21853]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 2 02:22:51 localhost python3[21873]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:22:51 localhost systemd[1]: Starting LSB: Bring up/down networking...
Dec 2 02:22:51 localhost network[21876]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 02:22:51 localhost network[21887]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 02:22:51 localhost network[21876]: WARN : [network] 'network-scripts' will be removed from distribution in near future.
Dec 2 02:22:51 localhost network[21888]: 'network-scripts' will be removed from distribution in near future.
Dec 2 02:22:51 localhost network[21876]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 02:22:51 localhost network[21889]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 02:22:51 localhost NetworkManager[5965]: [1764660171.5884] audit: op="connections-reload" pid=21917 uid=0 result="success"
Dec 2 02:22:51 localhost network[21876]: Bringing up loopback interface: [ OK ]
Dec 2 02:22:51 localhost NetworkManager[5965]: [1764660171.7875] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22005 uid=0 result="success"
Dec 2 02:22:51 localhost network[21876]: Bringing up interface eth0: [ OK ]
Dec 2 02:22:51 localhost systemd[1]: Started LSB: Bring up/down networking.
Dec 2 02:22:52 localhost python3[22046]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:22:52 localhost systemd[1]: Starting Open vSwitch Database Unit...
Dec 2 02:22:52 localhost chown[22050]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 2 02:22:52 localhost ovs-ctl[22055]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 2 02:22:52 localhost ovs-ctl[22055]: Creating empty database /etc/openvswitch/conf.db [ OK ] Dec 2 02:22:52 localhost ovs-ctl[22055]: Starting ovsdb-server [ OK ] Dec 2 02:22:52 localhost ovs-vsctl[22105]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1 Dec 2 02:22:52 localhost ovs-vsctl[22125]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\"" Dec 2 02:22:52 localhost ovs-ctl[22055]: Configuring Open vSwitch system IDs [ OK ] Dec 2 02:22:52 localhost ovs-ctl[22055]: Enabling remote OVSDB managers [ OK ] Dec 2 02:22:52 localhost systemd[1]: Started Open vSwitch Database Unit. Dec 2 02:22:52 localhost ovs-vsctl[22131]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005541913.novalocal Dec 2 02:22:52 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports... Dec 2 02:22:52 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports. Dec 2 02:22:52 localhost systemd[1]: Starting Open vSwitch Forwarding Unit... Dec 2 02:22:52 localhost kernel: openvswitch: Open vSwitch switching datapath Dec 2 02:22:52 localhost ovs-ctl[22175]: Inserting openvswitch module [ OK ] Dec 2 02:22:52 localhost ovs-ctl[22144]: Starting ovs-vswitchd [ OK ] Dec 2 02:22:52 localhost ovs-vsctl[22194]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005541913.novalocal Dec 2 02:22:52 localhost ovs-ctl[22144]: Enabling remote OVSDB managers [ OK ] Dec 2 02:22:52 localhost systemd[1]: Started Open vSwitch Forwarding Unit. Dec 2 02:22:52 localhost systemd[1]: Starting Open vSwitch... Dec 2 02:22:52 localhost systemd[1]: Finished Open vSwitch. 
Dec 2 02:22:55 localhost python3[22212]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000001a-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:22:56 localhost NetworkManager[5965]: [1764660176.4345] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22370 uid=0 result="success" Dec 2 02:22:56 localhost ifup[22371]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:22:56 localhost ifup[22372]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:22:56 localhost ifup[22373]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:22:56 localhost NetworkManager[5965]: [1764660176.4692] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22379 uid=0 result="success" Dec 2 02:22:56 localhost ovs-vsctl[22381]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:48:4f:22 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex Dec 2 02:22:56 localhost kernel: device ovs-system entered promiscuous mode Dec 2 02:22:56 localhost NetworkManager[5965]: [1764660176.4975] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4) Dec 2 02:22:56 localhost kernel: Timeout policy base is empty Dec 2 02:22:56 localhost kernel: Failed to associated timeout policy `ovs_test_tp' Dec 2 02:22:56 localhost systemd-udevd[22383]: Network interface NamePolicy= disabled on kernel command line. 
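The `ansible-ansible.legacy.command` record above logs its `_raw_params` with newlines escaped as `#012` — rsyslog's octal escape for `\n` (LF is 012 octal). A minimal sketch, using only the Python standard library, to recover the original multi-line shell command from such a record:

```python
import re

# rsyslog escapes control characters as #NNN (NNN octal); #012 is "\n".
raw = "os-net-config -c /etc/os-net-config/tripleo_config.yaml#012"

def unescape_syslog(s: str) -> str:
    # Replace each #NNN octal escape with the character it encodes.
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), s)

decoded = unescape_syslog(raw)
assert decoded == "os-net-config -c /etc/os-net-config/tripleo_config.yaml\n"
```

The same decoding applies to the later multi-command records (e.g. `ip a#012ping -c 2 -W 2 192.168.122.10#012...`), which are really one shell script per task, joined by escaped newlines.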
Dec 2 02:22:56 localhost kernel: device br-ex entered promiscuous mode Dec 2 02:22:56 localhost NetworkManager[5965]: [1764660176.5298] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5) Dec 2 02:22:56 localhost NetworkManager[5965]: [1764660176.5511] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22409 uid=0 result="success" Dec 2 02:22:56 localhost NetworkManager[5965]: [1764660176.5684] device (br-ex): carrier: link connected Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.6243] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22438 uid=0 result="success" Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.6756] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22453 uid=0 result="success" Dec 2 02:22:59 localhost NET[22478]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.7586] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed') Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.7727] dhcp4 (eth1): canceled DHCP transaction Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.7728] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.7728] dhcp4 (eth1): state changed no lease Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.7789] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22487 uid=0 result="success" Dec 2 02:22:59 localhost ifup[22488]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:22:59 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 2 02:22:59 localhost ifup[22490]: 'network-scripts' will be removed from distribution in near future. 
Dec 2 02:22:59 localhost ifup[22491]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:22:59 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.8150] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22504 uid=0 result="success" Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.8571] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22515 uid=0 result="success" Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.8639] device (eth1): carrier: link connected Dec 2 02:22:59 localhost NetworkManager[5965]: [1764660179.8859] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22524 uid=0 result="success" Dec 2 02:22:59 localhost ipv6_wait_tentative[22536]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Dec 2 02:23:00 localhost ipv6_wait_tentative[22541]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Dec 2 02:23:01 localhost NetworkManager[5965]: [1764660181.9505] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22550 uid=0 result="success" Dec 2 02:23:01 localhost ovs-vsctl[22565]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1 Dec 2 02:23:01 localhost kernel: device eth1 entered promiscuous mode Dec 2 02:23:02 localhost NetworkManager[5965]: [1764660182.0127] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22573 uid=0 result="success" Dec 2 02:23:02 localhost ifup[22574]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:02 localhost ifup[22575]: 'network-scripts' will be removed from distribution in near future. 
Dec 2 02:23:02 localhost ifup[22576]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:02 localhost NetworkManager[5965]: [1764660182.0403] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22582 uid=0 result="success" Dec 2 02:23:02 localhost NetworkManager[5965]: [1764660182.0732] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22592 uid=0 result="success" Dec 2 02:23:02 localhost ifup[22593]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:02 localhost ifup[22594]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:02 localhost ifup[22595]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:02 localhost NetworkManager[5965]: [1764660182.0975] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22601 uid=0 result="success" Dec 2 02:23:02 localhost ovs-vsctl[22604]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Dec 2 02:23:02 localhost kernel: device vlan23 entered promiscuous mode Dec 2 02:23:02 localhost NetworkManager[5965]: [1764660182.1294] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/6) Dec 2 02:23:02 localhost systemd-udevd[22606]: Network interface NamePolicy= disabled on kernel command line. 
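Each VLAN port in these records is plumbed with a single `ovs-vsctl` transaction that first removes any stale port (`--if-exists del-port`) and then re-adds it with its tag, so repeated runs of os-net-config converge to the same bridge state. A sketch of a helper that rebuilds the same argv (the helper itself is hypothetical, not part of os-net-config's API; the command shape is taken verbatim from the log lines):

```python
def ovs_add_port_cmd(bridge: str, port: str, tag: int) -> list[str]:
    # "--" separates sub-commands so the whole thing is one atomic
    # ovs-vsctl transaction, with a 10-second timeout ("-t 10"):
    # delete the port if it exists, re-add it tagged, mark it internal.
    return (
        ["ovs-vsctl", "-t", "10"]
        + ["--", "--if-exists", "del-port", bridge, port]
        + ["--", "add-port", bridge, port, f"tag={tag}"]
        + ["--", "set", "Interface", port, "type=internal"]
    )

cmd = ovs_add_port_cmd("br-ex", "vlan23", 23)
assert " ".join(cmd) == (
    "ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 "
    "-- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal"
)
```

The same delete-then-add pattern repeats below for vlan20, vlan21, vlan22 and vlan44, which is why the second pass over each interface succeeds without "port already exists" errors.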
Dec 2 02:23:02 localhost NetworkManager[5965]: [1764660182.1548] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22615 uid=0 result="success" Dec 2 02:23:02 localhost NetworkManager[5965]: [1764660182.1724] device (vlan23): carrier: link connected Dec 2 02:23:05 localhost NetworkManager[5965]: [1764660185.2272] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22644 uid=0 result="success" Dec 2 02:23:05 localhost NetworkManager[5965]: [1764660185.2765] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22659 uid=0 result="success" Dec 2 02:23:05 localhost NetworkManager[5965]: [1764660185.3325] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22680 uid=0 result="success" Dec 2 02:23:05 localhost ifup[22681]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:05 localhost ifup[22682]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:05 localhost ifup[22683]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:05 localhost NetworkManager[5965]: [1764660185.3645] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22689 uid=0 result="success" Dec 2 02:23:05 localhost ovs-vsctl[22692]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Dec 2 02:23:05 localhost kernel: device vlan20 entered promiscuous mode Dec 2 02:23:05 localhost systemd-udevd[22694]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 02:23:05 localhost NetworkManager[5965]: [1764660185.3984] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/7) Dec 2 02:23:05 localhost NetworkManager[5965]: [1764660185.4223] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22704 uid=0 result="success" Dec 2 02:23:05 localhost NetworkManager[5965]: [1764660185.4405] device (vlan20): carrier: link connected Dec 2 02:23:08 localhost NetworkManager[5965]: [1764660188.4943] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22734 uid=0 result="success" Dec 2 02:23:08 localhost NetworkManager[5965]: [1764660188.5432] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22749 uid=0 result="success" Dec 2 02:23:08 localhost NetworkManager[5965]: [1764660188.6012] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22770 uid=0 result="success" Dec 2 02:23:08 localhost ifup[22771]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:08 localhost ifup[22772]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:08 localhost ifup[22773]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:08 localhost NetworkManager[5965]: [1764660188.6294] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22779 uid=0 result="success" Dec 2 02:23:08 localhost ovs-vsctl[22782]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Dec 2 02:23:08 localhost kernel: device vlan21 entered promiscuous mode Dec 2 02:23:08 localhost systemd-udevd[22784]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 02:23:08 localhost NetworkManager[5965]: [1764660188.6777] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/8) Dec 2 02:23:08 localhost NetworkManager[5965]: [1764660188.7034] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22794 uid=0 result="success" Dec 2 02:23:08 localhost NetworkManager[5965]: [1764660188.7216] device (vlan21): carrier: link connected Dec 2 02:23:09 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 2 02:23:11 localhost NetworkManager[5965]: [1764660191.7755] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22824 uid=0 result="success" Dec 2 02:23:11 localhost NetworkManager[5965]: [1764660191.8217] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22839 uid=0 result="success" Dec 2 02:23:11 localhost NetworkManager[5965]: [1764660191.8835] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22860 uid=0 result="success" Dec 2 02:23:11 localhost ifup[22861]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:11 localhost ifup[22862]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:11 localhost ifup[22863]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:11 localhost NetworkManager[5965]: [1764660191.9141] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22869 uid=0 result="success" Dec 2 02:23:11 localhost ovs-vsctl[22872]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Dec 2 02:23:11 localhost systemd-udevd[22874]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 02:23:11 localhost NetworkManager[5965]: [1764660191.9554] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/9) Dec 2 02:23:11 localhost kernel: device vlan22 entered promiscuous mode Dec 2 02:23:11 localhost NetworkManager[5965]: [1764660191.9841] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22884 uid=0 result="success" Dec 2 02:23:12 localhost NetworkManager[5965]: [1764660192.0051] device (vlan22): carrier: link connected Dec 2 02:23:15 localhost NetworkManager[5965]: [1764660195.0566] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22914 uid=0 result="success" Dec 2 02:23:15 localhost NetworkManager[5965]: [1764660195.1104] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22929 uid=0 result="success" Dec 2 02:23:15 localhost NetworkManager[5965]: [1764660195.1673] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22950 uid=0 result="success" Dec 2 02:23:15 localhost ifup[22951]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:15 localhost ifup[22952]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:15 localhost ifup[22953]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 2 02:23:15 localhost NetworkManager[5965]: [1764660195.1989] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22959 uid=0 result="success" Dec 2 02:23:15 localhost ovs-vsctl[22962]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Dec 2 02:23:15 localhost systemd-udevd[22964]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 02:23:15 localhost kernel: device vlan44 entered promiscuous mode Dec 2 02:23:15 localhost NetworkManager[5965]: [1764660195.2417] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/10) Dec 2 02:23:15 localhost NetworkManager[5965]: [1764660195.2678] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22974 uid=0 result="success" Dec 2 02:23:15 localhost NetworkManager[5965]: [1764660195.2880] device (vlan44): carrier: link connected Dec 2 02:23:18 localhost NetworkManager[5965]: [1764660198.3466] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23004 uid=0 result="success" Dec 2 02:23:18 localhost NetworkManager[5965]: [1764660198.3977] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23019 uid=0 result="success" Dec 2 02:23:18 localhost NetworkManager[5965]: [1764660198.4525] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23040 uid=0 result="success" Dec 2 02:23:18 localhost ifup[23041]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:18 localhost ifup[23042]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:18 localhost ifup[23043]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:18 localhost NetworkManager[5965]: [1764660198.4810] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23049 uid=0 result="success" Dec 2 02:23:18 localhost ovs-vsctl[23052]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Dec 2 02:23:18 localhost NetworkManager[5965]: [1764660198.5763] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23059 uid=0 result="success" Dec 2 02:23:19 localhost NetworkManager[5965]: [1764660199.6430] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23086 uid=0 result="success" Dec 2 02:23:19 localhost NetworkManager[5965]: [1764660199.7000] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23101 uid=0 result="success" Dec 2 02:23:19 localhost NetworkManager[5965]: [1764660199.7622] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23122 uid=0 result="success" Dec 2 02:23:19 localhost ifup[23123]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:19 localhost ifup[23124]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:19 localhost ifup[23125]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:19 localhost NetworkManager[5965]: [1764660199.7994] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23131 uid=0 result="success" Dec 2 02:23:19 localhost ovs-vsctl[23134]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Dec 2 02:23:19 localhost NetworkManager[5965]: [1764660199.9017] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23141 uid=0 result="success" Dec 2 02:23:20 localhost NetworkManager[5965]: [1764660200.9618] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23169 uid=0 result="success" Dec 2 02:23:21 localhost NetworkManager[5965]: [1764660201.0075] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23184 uid=0 result="success" Dec 2 02:23:21 localhost NetworkManager[5965]: [1764660201.0667] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23205 uid=0 result="success" Dec 2 02:23:21 localhost ifup[23206]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:21 localhost ifup[23207]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:21 localhost ifup[23208]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:21 localhost NetworkManager[5965]: [1764660201.0992] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23214 uid=0 result="success" Dec 2 02:23:21 localhost ovs-vsctl[23217]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Dec 2 02:23:21 localhost NetworkManager[5965]: [1764660201.2057] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23224 uid=0 result="success" Dec 2 02:23:22 localhost NetworkManager[5965]: [1764660202.2669] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23252 uid=0 result="success" Dec 2 02:23:22 localhost NetworkManager[5965]: [1764660202.3073] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23267 uid=0 result="success" Dec 2 02:23:22 localhost NetworkManager[5965]: [1764660202.3634] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23288 uid=0 result="success" Dec 2 02:23:22 localhost ifup[23289]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:22 localhost ifup[23290]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:22 localhost ifup[23291]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:22 localhost NetworkManager[5965]: [1764660202.3928] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23297 uid=0 result="success" Dec 2 02:23:22 localhost ovs-vsctl[23300]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Dec 2 02:23:22 localhost NetworkManager[5965]: [1764660202.4928] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23307 uid=0 result="success" Dec 2 02:23:23 localhost NetworkManager[5965]: [1764660203.5532] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23335 uid=0 result="success" Dec 2 02:23:23 localhost NetworkManager[5965]: [1764660203.5969] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23350 uid=0 result="success" Dec 2 02:23:23 localhost NetworkManager[5965]: [1764660203.6414] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23371 uid=0 result="success" Dec 2 02:23:23 localhost ifup[23372]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 2 02:23:23 localhost ifup[23373]: 'network-scripts' will be removed from distribution in near future. Dec 2 02:23:23 localhost ifup[23374]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 2 02:23:23 localhost NetworkManager[5965]: [1764660203.6746] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23380 uid=0 result="success" Dec 2 02:23:23 localhost ovs-vsctl[23383]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Dec 2 02:23:23 localhost NetworkManager[5965]: [1764660203.7247] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23390 uid=0 result="success" Dec 2 02:23:24 localhost NetworkManager[5965]: [1764660204.7820] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23418 uid=0 result="success" Dec 2 02:23:24 localhost NetworkManager[5965]: [1764660204.8318] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23433 uid=0 result="success" Dec 2 02:24:17 localhost python3[23465]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000001b-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:24:23 localhost python3[23484]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= 
zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 2 02:24:23 localhost python3[23500]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 2 02:24:25 localhost python3[23514]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 2 02:24:25 localhost python3[23530]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Dec 2 02:24:26 localhost python3[23544]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname Dec 2 02:24:27 localhost python3[23559]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005541913.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000022-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:24:28 localhost python3[23579]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:24:28 localhost systemd[1]: Starting Hostname Service... Dec 2 02:24:28 localhost systemd[1]: Started Hostname Service. 
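The two `ansible-ansible.legacy.command` steps above derive the short hostname from the FQDN with a bash parameter expansion (`${hostname//./ }` replaces dots with spaces, the array takes element 0) and then set a new FQDN with a `.localdomain` suffix via `hostnamectl`. The same transformation, sketched in Python for clarity:

```python
fqdn = "np0005541913.novalocal"  # value slurped from /etc/hostname

# bash: hostname_str_array=(${hostname//./ }); echo ${hostname_str_array[0]}
short = fqdn.split(".")[0]

# bash: hostnamectl hostname "$hostname.localdomain"
new_fqdn = f"{short}.localdomain"

assert short == "np0005541913"
assert new_fqdn == "np0005541913.localdomain"
```

The result matches the NetworkManager record that follows, where the static hostname changes from "np0005541913.novalocal" to "np0005541913.localdomain".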
Dec 2 02:24:28 localhost systemd-hostnamed[23583]: Hostname set to (static)
Dec 2 02:24:28 localhost NetworkManager[5965]: [1764660268.5174] hostname: static hostname changed from "np0005541913.novalocal" to "np0005541913.localdomain"
Dec 2 02:24:28 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 2 02:24:28 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 2 02:24:29 localhost systemd-logind[757]: Session 11 logged out. Waiting for processes to exit.
Dec 2 02:24:29 localhost systemd[1]: session-11.scope: Deactivated successfully.
Dec 2 02:24:29 localhost systemd[1]: session-11.scope: Consumed 1min 45.112s CPU time.
Dec 2 02:24:29 localhost systemd-logind[757]: Removed session 11.
Dec 2 02:24:32 localhost sshd[23594]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:24:32 localhost systemd-logind[757]: New session 12 of user zuul.
Dec 2 02:24:32 localhost systemd[1]: Started Session 12 of User zuul.
Dec 2 02:24:33 localhost python3[23611]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 2 02:24:34 localhost systemd[1]: session-12.scope: Deactivated successfully.
Dec 2 02:24:34 localhost systemd-logind[757]: Session 12 logged out. Waiting for processes to exit.
Dec 2 02:24:34 localhost systemd-logind[757]: Removed session 12.
Dec 2 02:24:38 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 2 02:24:58 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 2 02:25:21 localhost sshd[23617]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:25:21 localhost systemd-logind[757]: New session 13 of user zuul.
Dec 2 02:25:21 localhost systemd[1]: Started Session 13 of User zuul.
Dec 2 02:25:22 localhost python3[23636]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:25:25 localhost systemd[1]: Reloading.
Dec 2 02:25:25 localhost systemd-sysv-generator[23679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:25:25 localhost systemd-rc-local-generator[23674]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:25:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:25:26 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 2 02:25:26 localhost systemd[1]: Reloading.
Dec 2 02:25:26 localhost systemd-rc-local-generator[23717]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:25:26 localhost systemd-sysv-generator[23723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:25:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:25:26 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 2 02:25:26 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 2 02:25:26 localhost systemd[1]: Reloading.
Dec 2 02:25:26 localhost systemd-rc-local-generator[23758]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:25:26 localhost systemd-sysv-generator[23763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:25:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:25:26 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Dec 2 02:25:27 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:25:27 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:25:27 localhost systemd[1]: Reloading.
Dec 2 02:25:27 localhost systemd-rc-local-generator[23817]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:25:27 localhost systemd-sysv-generator[23820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:25:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:25:27 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:25:27 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:25:27 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:25:27 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:25:27 localhost systemd[1]: run-rdd2e5b96d89644679b2a565c6126d080.service: Deactivated successfully.
Dec 2 02:25:27 localhost systemd[1]: run-r8216724a560f408db8309d093c592012.service: Deactivated successfully.
Dec 2 02:26:28 localhost systemd[1]: session-13.scope: Deactivated successfully.
Dec 2 02:26:28 localhost systemd[1]: session-13.scope: Consumed 4.552s CPU time.
Dec 2 02:26:28 localhost systemd-logind[757]: Session 13 logged out. Waiting for processes to exit.
Dec 2 02:26:28 localhost systemd-logind[757]: Removed session 13.
Dec 2 02:42:20 localhost sshd[24413]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:42:20 localhost systemd-logind[757]: New session 14 of user zuul.
Dec 2 02:42:20 localhost systemd[1]: Started Session 14 of User zuul.
Dec 2 02:42:21 localhost python3[24461]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 02:42:23 localhost python3[24548]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:42:26 localhost python3[24565]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:42:26 localhost python3[24581]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:26 localhost kernel: loop: module loaded
Dec 2 02:42:26 localhost kernel: loop3: detected capacity change from 0 to 14680064
Dec 2 02:42:27 localhost python3[24606]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:27 localhost lvm[24609]: PV /dev/loop3 not used.
Dec 2 02:42:27 localhost lvm[24611]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 2 02:42:27 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 2 02:42:27 localhost lvm[24620]: 1 logical volume(s) in volume group "ceph_vg0" now active
Dec 2 02:42:27 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
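The `_raw_params` values above pack multi-line scripts into a single syslog field, with each embedded newline encoded as `#012` (octal 012 is `\n`). A small bash sketch that recovers the readable script from one of the logged entries:

```shell
# "#012" is the syslog escape for a newline (octal 012).
# Recover the multi-line script from the logged _raw_params value:
raw='dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk'
decoded=${raw//'#012'/$'\n'}   # bash pattern substitution; $'\n' is a literal newline
echo "$decoded"
```

This decodes to the three commands the play actually ran: create a sparse 7 GiB backing file (`dd ... count=0 seek=7G` allocates no data blocks), attach it to /dev/loop3 with `losetup`, and list block devices. The kernel's "loop3: detected capacity change from 0 to 14680064" line is consistent with that size, since 14680064 512-byte sectors is exactly 7 GiB.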
Dec 2 02:42:28 localhost python3[24668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:42:28 localhost python3[24711]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661347.997428-53935-71278920694817/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:29 localhost python3[24741]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:42:29 localhost systemd[1]: Reloading.
Dec 2 02:42:29 localhost systemd-rc-local-generator[24767]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:42:29 localhost systemd-sysv-generator[24771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:42:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:42:29 localhost systemd[1]: Starting Ceph OSD losetup...
Dec 2 02:42:29 localhost bash[24784]: /dev/loop3: [64516]:8402014 (/var/lib/ceph-osd-0.img)
Dec 2 02:42:30 localhost systemd[1]: Finished Ceph OSD losetup.
Dec 2 02:42:30 localhost lvm[24785]: PV /dev/loop3 online, VG ceph_vg0 is complete.
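The unit deployed here is rendered from ceph-osd-losetup.service.j2; only its checksum is logged, not its contents. Based on the "Starting Ceph OSD losetup..." entry and the `losetup` status line printed by bash[24784], a plausible sketch of such a unit might look like the following. This is entirely hypothetical; the real template may differ:

```ini
# Hypothetical reconstruction of /etc/systemd/system/ceph-osd-losetup-0.service
# (the actual template contents are not recorded in this log).
[Unit]
Description=Ceph OSD losetup
DefaultDependencies=no
After=systemd-udev-settle.service
Before=lvm2-activation-early.service

[Service]
Type=oneshot
# Print loop status if already attached; otherwise re-attach the backing file.
ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'
RemainAfterExit=yes

[Install]
WantedBy=local-fs.target
```

A oneshot unit of this shape would explain both the immediate "Finished Ceph OSD losetup." and the subsequent LVM autoactivation of ceph_vg0 on reboot, since the loop device reappears before volume groups are scanned.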
Dec 2 02:42:30 localhost lvm[24785]: VG ceph_vg0 finished
Dec 2 02:42:31 localhost python3[24802]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:42:34 localhost python3[24819]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:42:35 localhost python3[24835]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:35 localhost kernel: loop4: detected capacity change from 0 to 14680064
Dec 2 02:42:35 localhost python3[24857]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:36 localhost lvm[24860]: PV /dev/loop4 not used.
Dec 2 02:42:36 localhost lvm[24870]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 2 02:42:36 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 2 02:42:36 localhost lvm[24872]: 1 logical volume(s) in volume group "ceph_vg1" now active
Dec 2 02:42:36 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 2 02:42:36 localhost python3[24920]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:42:37 localhost python3[24963]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661356.5509481-54105-167353494703947/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:37 localhost python3[24993]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:42:37 localhost systemd[1]: Reloading.
Dec 2 02:42:37 localhost systemd-rc-local-generator[25020]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:42:37 localhost systemd-sysv-generator[25026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:42:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:42:38 localhost systemd[1]: Starting Ceph OSD losetup...
Dec 2 02:42:38 localhost bash[25034]: /dev/loop4: [64516]:8402047 (/var/lib/ceph-osd-1.img)
Dec 2 02:42:38 localhost systemd[1]: Finished Ceph OSD losetup.
Dec 2 02:42:38 localhost lvm[25035]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 2 02:42:38 localhost lvm[25035]: VG ceph_vg1 finished
Dec 2 02:42:46 localhost python3[25080]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 02:42:48 localhost python3[25100]: ansible-hostname Invoked with name=np0005541913.localdomain use=None
Dec 2 02:42:48 localhost systemd[1]: Starting Hostname Service...
Dec 2 02:42:48 localhost systemd[1]: Started Hostname Service.
Dec 2 02:42:55 localhost python3[25123]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 2 02:42:56 localhost python3[25171]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.twl1h7y6tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:56 localhost python3[25201]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.twl1h7y6tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:57 localhost python3[25217]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.twl1h7y6tmphosts insertbefore=BOF block=192.168.122.106 np0005541912.localdomain np0005541912#012192.168.122.106 np0005541912.ctlplane.localdomain np0005541912.ctlplane#012192.168.122.107 np0005541913.localdomain np0005541913#012192.168.122.107 np0005541913.ctlplane.localdomain np0005541913.ctlplane#012192.168.122.108 np0005541914.localdomain np0005541914#012192.168.122.108 np0005541914.ctlplane.localdomain np0005541914.ctlplane#012192.168.122.103 np0005541909.localdomain np0005541909#012192.168.122.103 np0005541909.ctlplane.localdomain np0005541909.ctlplane#012192.168.122.104 np0005541910.localdomain np0005541910#012192.168.122.104 np0005541910.ctlplane.localdomain np0005541910.ctlplane#012192.168.122.105 np0005541911.localdomain np0005541911#012192.168.122.105 np0005541911.ctlplane.localdomain np0005541911.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:42:57 localhost python3[25233]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.twl1h7y6tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:42:58 localhost python3[25250]: ansible-file Invoked with path=/tmp/ansible.twl1h7y6tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:43:00 localhost python3[25266]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:43:01 localhost python3[25284]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:43:05 localhost python3[25333]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:43:06 localhost python3[25378]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661385.0698972-55051-133166024742249/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:43:07 localhost python3[25408]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:43:08 localhost python3[25426]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:43:08 localhost chronyd[763]: chronyd exiting
Dec 2 02:43:08 localhost systemd[1]: Stopping NTP client/server...
Dec 2 02:43:08 localhost systemd[1]: chronyd.service: Deactivated successfully.
Dec 2 02:43:08 localhost systemd[1]: Stopped NTP client/server.
Dec 2 02:43:08 localhost systemd[1]: chronyd.service: Consumed 90ms CPU time, read 1.9M from disk, written 0B to disk.
Dec 2 02:43:08 localhost systemd[1]: Starting NTP client/server...
Dec 2 02:43:08 localhost chronyd[25433]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 2 02:43:08 localhost chronyd[25433]: Frequency -26.710 +/- 0.093 ppm read from /var/lib/chrony/drift
Dec 2 02:43:08 localhost chronyd[25433]: Loaded seccomp filter (level 2)
Dec 2 02:43:08 localhost systemd[1]: Started NTP client/server.
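The blockinfile entry at 02:42:57 carries the full /etc/hosts payload inline, again with `#012` newline escapes and `marker=# {mark}` markers. Decoded (no values invented; every line is taken from the logged block), the fragment written at the top of the file (insertbefore=BOF) before it is copied back over /etc/hosts reads:

```
# START_HOST_ENTRIES_FOR_STACK: overcloud
192.168.122.106 np0005541912.localdomain np0005541912
192.168.122.106 np0005541912.ctlplane.localdomain np0005541912.ctlplane
192.168.122.107 np0005541913.localdomain np0005541913
192.168.122.107 np0005541913.ctlplane.localdomain np0005541913.ctlplane
192.168.122.108 np0005541914.localdomain np0005541914
192.168.122.108 np0005541914.ctlplane.localdomain np0005541914.ctlplane
192.168.122.103 np0005541909.localdomain np0005541909
192.168.122.103 np0005541909.ctlplane.localdomain np0005541909.ctlplane
192.168.122.104 np0005541910.localdomain np0005541910
192.168.122.104 np0005541910.ctlplane.localdomain np0005541910.ctlplane
192.168.122.105 np0005541911.localdomain np0005541911
192.168.122.105 np0005541911.ctlplane.localdomain np0005541911.ctlplane

192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
# END_HOST_ENTRIES_FOR_STACK: overcloud
```

One small note on the chrony.conf copy that follows: Ansible logs `mode=420` in decimal, which is octal 0644, i.e. the usual world-readable config permissions.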
Dec 2 02:43:09 localhost python3[25482]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:43:09 localhost python3[25525]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661388.7061102-55195-200133466110311/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:43:09 localhost python3[25555]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:43:09 localhost systemd[1]: Reloading.
Dec 2 02:43:10 localhost systemd-rc-local-generator[25577]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:43:10 localhost systemd-sysv-generator[25580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:43:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:43:10 localhost systemd[1]: Reloading.
Dec 2 02:43:10 localhost systemd-rc-local-generator[25617]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:43:10 localhost systemd-sysv-generator[25621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:43:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:43:10 localhost systemd[1]: Starting chronyd online sources service...
Dec 2 02:43:10 localhost chronyc[25631]: 200 OK
Dec 2 02:43:10 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Dec 2 02:43:10 localhost systemd[1]: Finished chronyd online sources service.
Dec 2 02:43:11 localhost python3[25647]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:43:11 localhost chronyd[25433]: System clock was stepped by 0.000000 seconds
Dec 2 02:43:11 localhost python3[25664]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:43:12 localhost chronyd[25433]: Selected source 167.160.187.12 (pool.ntp.org)
Dec 2 02:43:18 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 2 02:43:22 localhost python3[25685]: ansible-timezone Invoked with name=UTC hwclock=None
Dec 2 02:43:22 localhost systemd[1]: Starting Time & Date Service...
Dec 2 02:43:22 localhost systemd[1]: Started Time & Date Service.
Dec 2 02:43:23 localhost python3[25705]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:43:23 localhost chronyd[25433]: chronyd exiting
Dec 2 02:43:23 localhost systemd[1]: Stopping NTP client/server...
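Only the checksum of chrony-online.service is logged, not its contents. Given that starting it emits a single `chronyc` "200 OK" and the unit deactivates immediately after finishing, a plausible hypothetical sketch is a oneshot wrapper around `chronyc online` (the real file may differ):

```ini
# Hypothetical sketch of /etc/systemd/system/chrony-online.service
# (contents are not recorded in this log).
[Unit]
Description=chronyd online sources service
After=chronyd.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Tell chronyd its NTP sources are reachable so it starts polling immediately.
ExecStart=/usr/bin/chronyc online

[Install]
WantedBy=multi-user.target
```

The subsequent `chronyc makestep` (step the clock once, here by 0.000000 seconds since it was already in sync) and `chronyc waitsync 30` (block until synchronized, up to 30 tries) are the standard way to force and then confirm synchronization before proceeding.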
Dec 2 02:43:23 localhost systemd[1]: chronyd.service: Deactivated successfully.
Dec 2 02:43:23 localhost systemd[1]: Stopped NTP client/server.
Dec 2 02:43:23 localhost systemd[1]: Starting NTP client/server...
Dec 2 02:43:23 localhost chronyd[25712]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 2 02:43:23 localhost chronyd[25712]: Frequency -26.710 +/- 0.105 ppm read from /var/lib/chrony/drift
Dec 2 02:43:23 localhost chronyd[25712]: Loaded seccomp filter (level 2)
Dec 2 02:43:23 localhost systemd[1]: Started NTP client/server.
Dec 2 02:43:27 localhost chronyd[25712]: Selected source 51.222.12.92 (pool.ntp.org)
Dec 2 02:43:52 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 2 02:44:48 localhost sshd[25910]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:24 localhost sshd[25912]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:24 localhost systemd-logind[757]: New session 15 of user ceph-admin.
Dec 2 02:45:24 localhost systemd[1]: Created slice User Slice of UID 1002.
Dec 2 02:45:24 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 2 02:45:24 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 2 02:45:24 localhost systemd[1]: Starting User Manager for UID 1002...
Dec 2 02:45:24 localhost sshd[25930]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:24 localhost systemd[25916]: Queued start job for default target Main User Target.
Dec 2 02:45:24 localhost systemd[25916]: Created slice User Application Slice.
Dec 2 02:45:24 localhost systemd[25916]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 02:45:24 localhost systemd[25916]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 02:45:24 localhost systemd[25916]: Reached target Paths.
Dec 2 02:45:24 localhost systemd[25916]: Reached target Timers.
Dec 2 02:45:24 localhost systemd[25916]: Starting D-Bus User Message Bus Socket...
Dec 2 02:45:24 localhost systemd[25916]: Starting Create User's Volatile Files and Directories...
Dec 2 02:45:24 localhost systemd[25916]: Finished Create User's Volatile Files and Directories.
Dec 2 02:45:24 localhost systemd[25916]: Listening on D-Bus User Message Bus Socket.
Dec 2 02:45:24 localhost systemd[25916]: Reached target Sockets.
Dec 2 02:45:24 localhost systemd[25916]: Reached target Basic System.
Dec 2 02:45:24 localhost systemd[25916]: Reached target Main User Target.
Dec 2 02:45:24 localhost systemd[25916]: Startup finished in 115ms.
Dec 2 02:45:24 localhost systemd[1]: Started User Manager for UID 1002.
Dec 2 02:45:24 localhost systemd[1]: Started Session 15 of User ceph-admin.
Dec 2 02:45:24 localhost systemd-logind[757]: New session 17 of user ceph-admin.
Dec 2 02:45:24 localhost systemd[1]: Started Session 17 of User ceph-admin.
Dec 2 02:45:25 localhost sshd[25952]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:25 localhost systemd-logind[757]: New session 18 of user ceph-admin.
Dec 2 02:45:25 localhost systemd[1]: Started Session 18 of User ceph-admin.
Dec 2 02:45:25 localhost sshd[25971]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:25 localhost systemd-logind[757]: New session 19 of user ceph-admin.
Dec 2 02:45:25 localhost systemd[1]: Started Session 19 of User ceph-admin.
Dec 2 02:45:25 localhost sshd[25990]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:26 localhost systemd-logind[757]: New session 20 of user ceph-admin.
Dec 2 02:45:26 localhost systemd[1]: Started Session 20 of User ceph-admin.
Dec 2 02:45:26 localhost sshd[26009]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:26 localhost systemd-logind[757]: New session 21 of user ceph-admin.
Dec 2 02:45:26 localhost systemd[1]: Started Session 21 of User ceph-admin.
Dec 2 02:45:26 localhost sshd[26028]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:26 localhost systemd-logind[757]: New session 22 of user ceph-admin.
Dec 2 02:45:26 localhost systemd[1]: Started Session 22 of User ceph-admin.
Dec 2 02:45:27 localhost sshd[26047]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:27 localhost systemd-logind[757]: New session 23 of user ceph-admin.
Dec 2 02:45:27 localhost systemd[1]: Started Session 23 of User ceph-admin.
Dec 2 02:45:27 localhost sshd[26066]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:27 localhost systemd-logind[757]: New session 24 of user ceph-admin.
Dec 2 02:45:27 localhost systemd[1]: Started Session 24 of User ceph-admin.
Dec 2 02:45:27 localhost sshd[26085]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:27 localhost systemd-logind[757]: New session 25 of user ceph-admin.
Dec 2 02:45:27 localhost systemd[1]: Started Session 25 of User ceph-admin.
Dec 2 02:45:28 localhost sshd[26102]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:28 localhost systemd-logind[757]: New session 26 of user ceph-admin.
Dec 2 02:45:28 localhost systemd[1]: Started Session 26 of User ceph-admin.
Dec 2 02:45:28 localhost sshd[26121]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:45:28 localhost systemd-logind[757]: New session 27 of user ceph-admin.
Dec 2 02:45:28 localhost systemd[1]: Started Session 27 of User ceph-admin.
Dec 2 02:45:29 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:48 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:50 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26336 (sysctl)
Dec 2 02:45:50 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 2 02:45:50 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 2 02:45:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:51 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:45:55 localhost kernel: VFS: idmapped mount is not enabled.
Dec 2 02:46:16 localhost podman[26473]:
Dec 2 02:46:16 localhost podman[26473]: 2025-12-02 07:46:16.655947085 +0000 UTC m=+25.075775424 container create 94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_buck, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, release=1763362218, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph, architecture=x86_64, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, io.buildah.version=1.41.4)
Dec 2 02:46:16 localhost systemd[1]: Created slice Slice /machine.
Dec 2 02:46:16 localhost systemd[1]: Started libpod-conmon-94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2.scope.
Dec 2 02:46:16 localhost podman[26473]: 2025-12-02 07:45:51.623462588 +0000 UTC m=+0.043290987 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:46:16 localhost systemd[1]: Started libcrun container.
Dec 2 02:46:16 localhost podman[26473]: 2025-12-02 07:46:16.762123109 +0000 UTC m=+25.181951478 container init 94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_buck, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.41.4, version=7)
Dec 2 02:46:16 localhost podman[26473]: 2025-12-02 07:46:16.772951905 +0000 UTC m=+25.192780274 container start 94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_buck, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, RELEASE=main, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container)
Dec 2 02:46:16 localhost podman[26473]: 2025-12-02 07:46:16.773217155 +0000 UTC m=+25.193045534 container attach 94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_buck, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, release=1763362218, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:46:16 localhost happy_buck[26825]: 167 167
Dec 2 02:46:16 localhost systemd[1]: libpod-94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2.scope: Deactivated successfully.
Dec 2 02:46:16 localhost podman[26473]: 2025-12-02 07:46:16.777949059 +0000 UTC m=+25.197777438 container died 94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_buck, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.component=rhceph-container)
Dec 2 02:46:16 localhost podman[26830]: 2025-12-02 07:46:16.845179989 +0000 UTC m=+0.060997064 container remove 94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_buck, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, version=7, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 02:46:16 localhost systemd[1]: libpod-conmon-94b86ab6d6359020d2dc138f650f697a3966db4c57d6de8575c6b91eccb16ed2.scope: Deactivated successfully.
Dec 2 02:46:17 localhost podman[26852]:
Dec 2 02:46:17 localhost podman[26852]: 2025-12-02 07:46:17.031363708 +0000 UTC m=+0.043739984 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:46:17 localhost systemd[1]: tmp-crun.hIEeGd.mount: Deactivated successfully.
Dec 2 02:46:17 localhost systemd[1]: var-lib-containers-storage-overlay-6d44bcb444b7d11fdc7e0e97e7099f04e6d4b16174b6a6be0898ba0d13d240fb-merged.mount: Deactivated successfully.
Dec 2 02:46:20 localhost podman[26852]: 2025-12-02 07:46:20.283387774 +0000 UTC m=+3.295764060 container create ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_mestorf, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218)
Dec 2 02:46:21 localhost systemd[1]: Started libpod-conmon-ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b.scope.
Dec 2 02:46:21 localhost systemd[1]: Started libcrun container.
Dec 2 02:46:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba515e25921f71f7c6ad0cd1c9f745e2567c058902d3018e6b9f8ec89ac9ebe1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba515e25921f71f7c6ad0cd1c9f745e2567c058902d3018e6b9f8ec89ac9ebe1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:46:21 localhost podman[26852]: 2025-12-02 07:46:21.046991223 +0000 UTC m=+4.059367469 container init ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_mestorf, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, RELEASE=main, distribution-scope=public)
Dec 2 02:46:21 localhost podman[26852]: 2025-12-02 07:46:21.10322402 +0000 UTC m=+4.115600276 container start ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_mestorf, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public)
Dec 2 02:46:21 localhost podman[26852]: 2025-12-02 07:46:21.103426507 +0000 UTC m=+4.115802753 container attach ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_mestorf, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public)
Dec 2 02:46:21 localhost magical_mestorf[27004]: [
Dec 2 02:46:21 localhost magical_mestorf[27004]: {
Dec 2 02:46:21 localhost magical_mestorf[27004]: "available": false,
Dec 2 02:46:21 localhost magical_mestorf[27004]: "ceph_device": false,
Dec 2 02:46:21 localhost magical_mestorf[27004]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "lsm_data": {},
Dec 2 02:46:21 localhost magical_mestorf[27004]: "lvs": [],
Dec 2 02:46:21 localhost magical_mestorf[27004]: "path": "/dev/sr0",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "rejected_reasons": [
Dec 2 02:46:21 localhost magical_mestorf[27004]: "Has a FileSystem",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "Insufficient space (<5GB)"
Dec 2 02:46:21 localhost magical_mestorf[27004]: ],
Dec 2 02:46:21 localhost magical_mestorf[27004]: "sys_api": {
Dec 2 02:46:21 localhost magical_mestorf[27004]: "actuators": null,
Dec 2 02:46:21 localhost magical_mestorf[27004]: "device_nodes": "sr0",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "human_readable_size": "482.00 KB",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "id_bus": "ata",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "model": "QEMU DVD-ROM",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "nr_requests": "2",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "partitions": {},
Dec 2 02:46:21 localhost magical_mestorf[27004]: "path": "/dev/sr0",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "removable": "1",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "rev": "2.5+",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "ro": "0",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "rotational": "1",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "sas_address": "",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "sas_device_handle": "",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "scheduler_mode": "mq-deadline",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "sectors": 0,
Dec 2 02:46:21 localhost magical_mestorf[27004]: "sectorsize": "2048",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "size": 493568.0,
Dec 2 02:46:21 localhost magical_mestorf[27004]: "support_discard": "0",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "type": "disk",
Dec 2 02:46:21 localhost magical_mestorf[27004]: "vendor": "QEMU"
Dec 2 02:46:21 localhost magical_mestorf[27004]: }
Dec 2 02:46:21 localhost magical_mestorf[27004]: }
Dec 2 02:46:21 localhost magical_mestorf[27004]: ]
Dec 2 02:46:21 localhost systemd[1]: libpod-ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b.scope: Deactivated successfully.
Dec 2 02:46:21 localhost podman[26852]: 2025-12-02 07:46:21.947669473 +0000 UTC m=+4.960045729 container died ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_mestorf, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , release=1763362218, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container)
Dec 2 02:46:21 localhost systemd[1]: tmp-crun.d8KfNs.mount: Deactivated successfully.
Dec 2 02:46:22 localhost systemd[1]: var-lib-containers-storage-overlay-ba515e25921f71f7c6ad0cd1c9f745e2567c058902d3018e6b9f8ec89ac9ebe1-merged.mount: Deactivated successfully.
Dec 2 02:46:22 localhost podman[28390]: 2025-12-02 07:46:22.018417454 +0000 UTC m=+0.065282762 container remove ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_mestorf, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1763362218, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.41.4, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 02:46:22 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:22 localhost systemd[1]: libpod-conmon-ae053d1a0c166c191e92107c230c5743439471357e49b153457d55f3f725317b.scope: Deactivated successfully.
Dec 2 02:46:22 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully.
Dec 2 02:46:22 localhost systemd[1]: Closed Process Core Dump Socket.
Dec 2 02:46:22 localhost systemd[1]: Stopping Process Core Dump Socket...
Dec 2 02:46:22 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 2 02:46:22 localhost systemd[1]: Reloading.
Dec 2 02:46:22 localhost systemd-rc-local-generator[28467]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:22 localhost systemd-sysv-generator[28471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:22 localhost systemd[1]: Reloading.
Dec 2 02:46:22 localhost systemd-sysv-generator[28510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:22 localhost systemd-rc-local-generator[28504]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:51 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:52 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:52 localhost podman[28591]:
Dec 2 02:46:52 localhost podman[28591]: 2025-12-02 07:46:52.162474742 +0000 UTC m=+0.070588344 container create aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_poincare, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 02:46:52 localhost systemd[1]: Started libpod-conmon-aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f.scope.
Dec 2 02:46:52 localhost systemd[1]: Started libcrun container.
Dec 2 02:46:52 localhost podman[28591]: 2025-12-02 07:46:52.228267635 +0000 UTC m=+0.136381237 container init aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_poincare, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, release=1763362218, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True)
Dec 2 02:46:52 localhost podman[28591]: 2025-12-02 07:46:52.133362029 +0000 UTC m=+0.041475631 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:46:52 localhost podman[28591]: 2025-12-02 07:46:52.239006708 +0000 UTC m=+0.147120320 container start aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_poincare, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, version=7, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 02:46:52 localhost podman[28591]: 2025-12-02 07:46:52.239325946 +0000 UTC m=+0.147439618 container attach aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_poincare, architecture=x86_64, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 2 02:46:52 localhost inspiring_poincare[28606]: 167 167
Dec 2 02:46:52 localhost systemd[1]: libpod-aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f.scope: Deactivated successfully.
Dec 2 02:46:52 localhost podman[28591]: 2025-12-02 07:46:52.242643336 +0000 UTC m=+0.150756958 container died aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_poincare, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , release=1763362218, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:46:52 localhost podman[28611]: 2025-12-02 07:46:52.333962274 +0000 UTC m=+0.077475281 container remove aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_poincare, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.expose-services=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z)
Dec 2 02:46:52 localhost systemd[1]: libpod-conmon-aace7dbcc756d974daf1759eca0fc9b8dfa02b496836b21add4c426d6a83606f.scope: Deactivated successfully.
Dec 2 02:46:52 localhost systemd[1]: Reloading.
Dec 2 02:46:52 localhost systemd-rc-local-generator[28654]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:52 localhost systemd-sysv-generator[28658]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:52 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:52 localhost systemd[1]: Reloading.
Dec 2 02:46:52 localhost systemd-rc-local-generator[28691]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:52 localhost systemd-sysv-generator[28694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:52 localhost systemd[1]: Reached target All Ceph clusters and services.
Dec 2 02:46:52 localhost systemd[1]: Reloading.
Dec 2 02:46:52 localhost systemd-rc-local-generator[28727]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:52 localhost systemd-sysv-generator[28730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:53 localhost systemd[1]: Reached target Ceph cluster c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 2 02:46:53 localhost systemd[1]: Reloading.
Dec 2 02:46:53 localhost systemd-rc-local-generator[28770]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:53 localhost systemd-sysv-generator[28773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:53 localhost systemd[1]: Reloading.
Dec 2 02:46:53 localhost systemd-rc-local-generator[28805]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:46:53 localhost systemd-sysv-generator[28812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:46:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:46:53 localhost systemd[1]: Created slice Slice /system/ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 2 02:46:53 localhost systemd[1]: Reached target System Time Set.
Dec 2 02:46:53 localhost systemd[1]: Reached target System Time Synchronized.
Dec 2 02:46:53 localhost systemd[1]: Starting Ceph crash.np0005541913 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 2 02:46:53 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:53 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 2 02:46:53 localhost podman[28869]: Dec 2 02:46:53 localhost podman[28869]: 2025-12-02 07:46:53.972059562 +0000 UTC m=+0.073774240 container create 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, version=7, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True) Dec 2 02:46:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf37503ebf93858839be6ff8621087588dcfd2cd5142a47fc29d2d0ec632bd4/merged/etc/ceph/ceph.client.crash.np0005541913.keyring supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf37503ebf93858839be6ff8621087588dcfd2cd5142a47fc29d2d0ec632bd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:54 localhost podman[28869]: 2025-12-02 07:46:53.942203759 
+0000 UTC m=+0.043918427 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:46:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf37503ebf93858839be6ff8621087588dcfd2cd5142a47fc29d2d0ec632bd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:54 localhost podman[28869]: 2025-12-02 07:46:54.071609275 +0000 UTC m=+0.173323943 container init 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 02:46:54 localhost podman[28869]: 2025-12-02 07:46:54.0865101 +0000 UTC m=+0.188224768 container start 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, 
GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z) Dec 2 02:46:54 localhost bash[28869]: 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 Dec 2 02:46:54 localhost systemd[1]: Started Ceph crash.np0005541913 for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: INFO:ceph-crash:pinging cluster to exercise our key Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: 2025-12-02T07:46:54.258+0000 7efc164c7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: 2025-12-02T07:46:54.258+0000 7efc164c7640 -1 AuthRegistry(0x7efc100680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: 2025-12-02T07:46:54.259+0000 7efc164c7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: 2025-12-02T07:46:54.259+0000 7efc164c7640 -1 AuthRegistry(0x7efc164c6000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: 2025-12-02T07:46:54.266+0000 7efc14a3d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: 2025-12-02T07:46:54.268+0000 7efc0ffff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: 2025-12-02T07:46:54.268+0000 7efc0f7fe640 -1 monclient(hunting): 
handle_auth_bad_method server allowed_methods [2] but i only support [1] Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: 2025-12-02T07:46:54.268+0000 7efc164c7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: [errno 13] RADOS permission denied (error connecting to the cluster) Dec 2 02:46:54 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913[28883]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s Dec 2 02:46:54 localhost podman[28970]: Dec 2 02:46:54 localhost podman[28970]: 2025-12-02 07:46:54.88853021 +0000 UTC m=+0.058818583 container create d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, RELEASE=main, vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218) Dec 2 
02:46:54 localhost systemd[1]: Started libpod-conmon-d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd.scope. Dec 2 02:46:54 localhost systemd[1]: Started libcrun container. Dec 2 02:46:54 localhost podman[28970]: 2025-12-02 07:46:54.948825804 +0000 UTC m=+0.119114177 container init d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 02:46:54 localhost podman[28970]: 2025-12-02 07:46:54.956317428 +0000 UTC m=+0.126605801 container start d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, 
version=7, vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 02:46:54 localhost podman[28970]: 2025-12-02 07:46:54.956488803 +0000 UTC m=+0.126777206 container attach d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, ceph=True, RELEASE=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4) Dec 2 02:46:54 localhost quizzical_albattani[28985]: 167 167 Dec 2 02:46:54 localhost systemd[1]: libpod-d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd.scope: Deactivated successfully. Dec 2 02:46:54 localhost podman[28970]: 2025-12-02 07:46:54.960771299 +0000 UTC m=+0.131059702 container died d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 02:46:54 localhost podman[28970]: 2025-12-02 07:46:54.862567513 +0000 UTC m=+0.032855936 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:46:55 localhost podman[28990]: 2025-12-02 07:46:55.017996888 +0000 
UTC m=+0.048354389 container remove d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_albattani, io.buildah.version=1.41.4, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, name=rhceph, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Dec 2 02:46:55 localhost systemd[1]: libpod-conmon-d0edac2ecf6b5e6c4d1b4ad4d62ddb31075f7c9c9e3e7e880f5328be6d1815fd.scope: Deactivated successfully. Dec 2 02:46:55 localhost systemd[1]: var-lib-containers-storage-overlay-8e77ead6f51af6fbaea5426585741a435352adfad94e0763689ddff20894ceee-merged.mount: Deactivated successfully. 
Dec 2 02:46:55 localhost podman[29010]: Dec 2 02:46:55 localhost podman[29010]: 2025-12-02 07:46:55.204588811 +0000 UTC m=+0.067070578 container create d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, release=1763362218, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) Dec 2 02:46:55 localhost systemd[1]: Started libpod-conmon-d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045.scope. Dec 2 02:46:55 localhost systemd[1]: Started libcrun container. 
Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57812ead10bf640953eb7ea3baeeef0a816e07be8ab45bada3efc4ed2639414/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:55 localhost podman[29010]: 2025-12-02 07:46:55.178912802 +0000 UTC m=+0.041394589 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57812ead10bf640953eb7ea3baeeef0a816e07be8ab45bada3efc4ed2639414/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57812ead10bf640953eb7ea3baeeef0a816e07be8ab45bada3efc4ed2639414/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57812ead10bf640953eb7ea3baeeef0a816e07be8ab45bada3efc4ed2639414/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a57812ead10bf640953eb7ea3baeeef0a816e07be8ab45bada3efc4ed2639414/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Dec 2 02:46:55 localhost podman[29010]: 2025-12-02 07:46:55.324756125 +0000 UTC m=+0.187237882 container init d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, com.redhat.component=rhceph-container, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhceph, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:46:55 localhost podman[29010]: 2025-12-02 07:46:55.333723449 +0000 UTC m=+0.196205206 container start d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux , vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, 
architecture=x86_64, name=rhceph) Dec 2 02:46:55 localhost podman[29010]: 2025-12-02 07:46:55.333932355 +0000 UTC m=+0.196414142 container attach d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, release=1763362218, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7) Dec 2 02:46:55 localhost elastic_shockley[29026]: --> passed data devices: 0 physical, 2 LVM Dec 2 02:46:55 localhost elastic_shockley[29026]: --> relative data size: 1.0 Dec 2 02:46:55 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph-authtool --gen-print-key Dec 2 02:46:55 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 79866ec3-47a0-4109-900e-7f4b902017d5 Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph-authtool --gen-print-key Dec 2 02:46:56 
localhost lvm[29080]: PV /dev/loop3 online, VG ceph_vg0 is complete. Dec 2 02:46:56 localhost lvm[29080]: VG ceph_vg0 finished Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0 Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0 Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap Dec 2 02:46:56 localhost elastic_shockley[29026]: stderr: got monmap epoch 3 Dec 2 02:46:56 localhost elastic_shockley[29026]: --> Creating keyring file for osd.0 Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/ Dec 2 02:46:56 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 79866ec3-47a0-4109-900e-7f4b902017d5 --setuser ceph --setgroup ceph Dec 2 02:46:59 localhost elastic_shockley[29026]: stderr: 2025-12-02T07:46:56.993+0000 7f0daf5f7a80 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Dec 2 02:46:59 localhost 
elastic_shockley[29026]: stderr: 2025-12-02T07:46:56.993+0000 7f0daf5f7a80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid Dec 2 02:46:59 localhost elastic_shockley[29026]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0 Dec 2 02:46:59 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Dec 2 02:46:59 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config Dec 2 02:46:59 localhost elastic_shockley[29026]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block Dec 2 02:46:59 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block Dec 2 02:46:59 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Dec 2 02:46:59 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Dec 2 02:46:59 localhost elastic_shockley[29026]: --> ceph-volume lvm activate successful for osd ID: 0 Dec 2 02:46:59 localhost elastic_shockley[29026]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0 Dec 2 02:46:59 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph-authtool --gen-print-key Dec 2 02:46:59 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 580fd654-ce1e-4384-8610-e58c3d508de1 Dec 2 02:47:00 localhost lvm[30024]: PV /dev/loop4 online, VG ceph_vg1 is complete. 
Dec 2 02:47:00 localhost lvm[30024]: VG ceph_vg1 finished Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph-authtool --gen-print-key Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3 Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1 Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap Dec 2 02:47:00 localhost elastic_shockley[29026]: stderr: got monmap epoch 3 Dec 2 02:47:00 localhost elastic_shockley[29026]: --> Creating keyring file for osd.3 Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/ Dec 2 02:47:00 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 580fd654-ce1e-4384-8610-e58c3d508de1 --setuser ceph --setgroup ceph Dec 2 02:47:03 localhost elastic_shockley[29026]: stderr: 2025-12-02T07:47:00.818+0000 7f9dab183a80 -1 bluestore(/var/lib/ceph/osd/ceph-3//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Dec 2 
02:47:03 localhost elastic_shockley[29026]: stderr: 2025-12-02T07:47:00.818+0000 7f9dab183a80 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid Dec 2 02:47:03 localhost elastic_shockley[29026]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1 Dec 2 02:47:03 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 2 02:47:03 localhost elastic_shockley[29026]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-3 --no-mon-config Dec 2 02:47:03 localhost elastic_shockley[29026]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block Dec 2 02:47:03 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block Dec 2 02:47:03 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 2 02:47:03 localhost elastic_shockley[29026]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 2 02:47:03 localhost elastic_shockley[29026]: --> ceph-volume lvm activate successful for osd ID: 3 Dec 2 02:47:03 localhost elastic_shockley[29026]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1 Dec 2 02:47:03 localhost systemd[1]: libpod-d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045.scope: Deactivated successfully. Dec 2 02:47:03 localhost systemd[1]: libpod-d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045.scope: Consumed 3.679s CPU time. 
Dec 2 02:47:03 localhost podman[30938]: 2025-12-02 07:47:03.493280409 +0000 UTC m=+0.051486573 container died d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True) Dec 2 02:47:03 localhost systemd[1]: var-lib-containers-storage-overlay-a57812ead10bf640953eb7ea3baeeef0a816e07be8ab45bada3efc4ed2639414-merged.mount: Deactivated successfully. 
Dec 2 02:47:03 localhost podman[30938]: 2025-12-02 07:47:03.521134548 +0000 UTC m=+0.079340652 container remove d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_shockley, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, GIT_CLEAN=True, ceph=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.component=rhceph-container) Dec 2 02:47:03 localhost systemd[1]: libpod-conmon-d14bc450c67fbc207c1be6d406e9a476cc2a4e3f77b8e2f121434ee3c8c0f045.scope: Deactivated successfully. 
Dec 2 02:47:04 localhost podman[31020]: Dec 2 02:47:04 localhost podman[31020]: 2025-12-02 07:47:04.167788636 +0000 UTC m=+0.061308512 container create d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mestorf, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7) Dec 2 02:47:04 localhost systemd[1]: Started libpod-conmon-d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5.scope. Dec 2 02:47:04 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:04 localhost podman[31020]: 2025-12-02 07:47:04.226753532 +0000 UTC m=+0.120273398 container init d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mestorf, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, ceph=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, release=1763362218) Dec 2 02:47:04 localhost podman[31020]: 2025-12-02 07:47:04.234242695 +0000 UTC m=+0.127762571 container start d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mestorf, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, 
vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, version=7, RELEASE=main, distribution-scope=public, release=1763362218) Dec 2 02:47:04 localhost podman[31020]: 2025-12-02 07:47:04.234523974 +0000 UTC m=+0.128043900 container attach d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mestorf, RELEASE=main, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
release=1763362218) Dec 2 02:47:04 localhost quirky_mestorf[31036]: 167 167 Dec 2 02:47:04 localhost systemd[1]: libpod-d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5.scope: Deactivated successfully. Dec 2 02:47:04 localhost podman[31020]: 2025-12-02 07:47:04.138296312 +0000 UTC m=+0.031816178 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:04 localhost podman[31020]: 2025-12-02 07:47:04.237802823 +0000 UTC m=+0.131322759 container died d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mestorf, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, maintainer=Guillaume Abrioux , vcs-type=git, GIT_CLEAN=True, architecture=x86_64) Dec 2 02:47:04 localhost podman[31041]: 2025-12-02 07:47:04.30926866 +0000 UTC m=+0.065766883 container remove d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_mestorf, 
GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Dec 2 02:47:04 localhost systemd[1]: libpod-conmon-d2b30f96796c44295bcf2958aaf620a83a9feee9ab7da15fc6f60ac6109169c5.scope: Deactivated successfully. Dec 2 02:47:04 localhost systemd[1]: var-lib-containers-storage-overlay-17f193f8448a675432606997dfbb8f973caa4e078b2083d726cda503b6f61a95-merged.mount: Deactivated successfully. 
Dec 2 02:47:04 localhost podman[31062]: Dec 2 02:47:04 localhost podman[31062]: 2025-12-02 07:47:04.516966218 +0000 UTC m=+0.074021297 container create 60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_ritchie, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container) Dec 2 02:47:04 localhost systemd[1]: Started libpod-conmon-60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af.scope. Dec 2 02:47:04 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92053208b7194d38b3f2182affca70eaf9aa9a799a24a7136a6eac170a0dcae/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:04 localhost podman[31062]: 2025-12-02 07:47:04.487908697 +0000 UTC m=+0.044963876 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92053208b7194d38b3f2182affca70eaf9aa9a799a24a7136a6eac170a0dcae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92053208b7194d38b3f2182affca70eaf9aa9a799a24a7136a6eac170a0dcae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:04 localhost podman[31062]: 2025-12-02 07:47:04.612964204 +0000 UTC m=+0.170019283 container init 60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_ritchie, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, 
io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 02:47:04 localhost systemd[1]: tmp-crun.aeqEDh.mount: Deactivated successfully. Dec 2 02:47:04 localhost podman[31062]: 2025-12-02 07:47:04.626186604 +0000 UTC m=+0.183241683 container start 60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_ritchie, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.) 
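The kernel's "xfs filesystem ... supports timestamps until 2038 (0x7fffffff)" warnings above refer to the largest 32-bit signed Unix timestamp. Decoding that hex value shows the limit the message is hedging about:

```python
from datetime import datetime, timezone

# 0x7fffffff = 2147483647 seconds after the epoch: the classic "year 2038"
# rollover point for 32-bit time_t / on-disk inode timestamps.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # → '2038-01-19T03:14:07+00:00'
```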
Dec 2 02:47:04 localhost podman[31062]: 2025-12-02 07:47:04.626556514 +0000 UTC m=+0.183611593 container attach 60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_ritchie, build-date=2025-11-26T19:44:28Z, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-type=git, name=rhceph, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7)
Dec 2 02:47:04 localhost pensive_ritchie[31077]: {
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "0": [
Dec 2 02:47:04 localhost pensive_ritchie[31077]: {
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "devices": [
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "/dev/loop3"
Dec 2 02:47:04 localhost pensive_ritchie[31077]: ],
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_name": "ceph_lv0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_size": "7511998464",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZLSqh9-iILz-8uhj-aKI4-uLLc-tJ4e-DU20Ju,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c7c8e171-a193-56fb-95fa-8879fcfa7074,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=79866ec3-47a0-4109-900e-7f4b902017d5,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_uuid": "ZLSqh9-iILz-8uhj-aKI4-uLLc-tJ4e-DU20Ju",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "name": "ceph_lv0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "path": "/dev/ceph_vg0/ceph_lv0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "tags": {
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.block_uuid": "ZLSqh9-iILz-8uhj-aKI4-uLLc-tJ4e-DU20Ju",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.cephx_lockbox_secret": "",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.cluster_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.cluster_name": "ceph",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.crush_device_class": "",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.encrypted": "0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.osd_fsid": "79866ec3-47a0-4109-900e-7f4b902017d5",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.osd_id": "0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.osdspec_affinity": "default_drive_group",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.type": "block",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.vdo": "0"
Dec 2 02:47:04 localhost pensive_ritchie[31077]: },
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "type": "block",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "vg_name": "ceph_vg0"
Dec 2 02:47:04 localhost pensive_ritchie[31077]: }
Dec 2 02:47:04 localhost pensive_ritchie[31077]: ],
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "3": [
Dec 2 02:47:04 localhost pensive_ritchie[31077]: {
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "devices": [
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "/dev/loop4"
Dec 2 02:47:04 localhost pensive_ritchie[31077]: ],
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_name": "ceph_lv1",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_size": "7511998464",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uizRyl-UtyY-UzC3-C8WR-dfjh-VTZH-QMxT4X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c7c8e171-a193-56fb-95fa-8879fcfa7074,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=580fd654-ce1e-4384-8610-e58c3d508de1,ceph.osd_id=3,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "lv_uuid": "uizRyl-UtyY-UzC3-C8WR-dfjh-VTZH-QMxT4X",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "name": "ceph_lv1",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "path": "/dev/ceph_vg1/ceph_lv1",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "tags": {
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.block_uuid": "uizRyl-UtyY-UzC3-C8WR-dfjh-VTZH-QMxT4X",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.cephx_lockbox_secret": "",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.cluster_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.cluster_name": "ceph",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.crush_device_class": "",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.encrypted": "0",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.osd_fsid": "580fd654-ce1e-4384-8610-e58c3d508de1",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.osd_id": "3",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.osdspec_affinity": "default_drive_group",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.type": "block",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "ceph.vdo": "0"
Dec 2 02:47:04 localhost pensive_ritchie[31077]: },
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "type": "block",
Dec 2 02:47:04 localhost pensive_ritchie[31077]: "vg_name": "ceph_vg1"
Dec 2 02:47:04 localhost pensive_ritchie[31077]: }
Dec 2 02:47:04 localhost pensive_ritchie[31077]: ]
Dec 2 02:47:04 localhost pensive_ritchie[31077]: }
Dec 2 02:47:04 localhost systemd[1]: libpod-60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af.scope: Deactivated successfully.
Dec 2 02:47:04 localhost podman[31062]: 2025-12-02 07:47:04.952787762 +0000 UTC m=+0.509842881 container died 60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_ritchie, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_BRANCH=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1763362218,
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, ceph=True) Dec 2 02:47:05 localhost podman[31086]: 2025-12-02 07:47:05.023071297 +0000 UTC m=+0.062583126 container remove 60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_ritchie, io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=1763362218, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7) Dec 2 02:47:05 localhost systemd[1]: libpod-conmon-60cb85d7752ac58b3b50b5b080280d52b4755bb5468209588b6d0da06deb12af.scope: Deactivated successfully. Dec 2 02:47:05 localhost systemd[1]: var-lib-containers-storage-overlay-b92053208b7194d38b3f2182affca70eaf9aa9a799a24a7136a6eac170a0dcae-merged.mount: Deactivated successfully. 
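The `ceph-volume lvm list` JSON emitted above carries each OSD's metadata twice: as the flat `lv_tags` string ("k=v,k=v,...") and as the expanded `tags` object. A minimal sketch of turning the flat form into a dict (`parse_lv_tags` is my helper, not a ceph-volume API; it assumes, as in the output above, that values contain no commas):

```python
def parse_lv_tags(lv_tags: str) -> dict:
    """Split a ceph-volume lv_tags string into a {key: value} dict.
    Values may be empty, e.g. ceph.crush_device_class=."""
    tags = {}
    for pair in lv_tags.split(","):
        key, _, value = pair.partition("=")
        tags[key] = value
    return tags

# Truncated sample taken from the osd.0 entry in the log above.
lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
           "ceph.cluster_name=ceph,ceph.crush_device_class=,"
           "ceph.osd_id=0,ceph.type=block")
print(parse_lv_tags(lv_tags)["ceph.osd_id"])  # → 0
```

Applied to the full strings in the log, this reproduces the `tags` objects that ceph-volume printed alongside them.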
Dec 2 02:47:05 localhost podman[31168]: Dec 2 02:47:05 localhost podman[31168]: 2025-12-02 07:47:05.760903398 +0000 UTC m=+0.062357290 container create 39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldstine, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, vcs-type=git) Dec 2 02:47:05 localhost systemd[1]: Started libpod-conmon-39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f.scope. Dec 2 02:47:05 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:05 localhost podman[31168]: 2025-12-02 07:47:05.830489154 +0000 UTC m=+0.131943056 container init 39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldstine, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, ceph=True, release=1763362218, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:05 localhost podman[31168]: 2025-12-02 07:47:05.739477155 +0000 UTC m=+0.040931027 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:05 localhost systemd[1]: tmp-crun.yAdrR3.mount: Deactivated successfully. 
Dec 2 02:47:05 localhost podman[31168]: 2025-12-02 07:47:05.842840551 +0000 UTC m=+0.144294443 container start 39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldstine, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, release=1763362218, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7) Dec 2 02:47:05 localhost podman[31168]: 2025-12-02 07:47:05.843106178 +0000 UTC m=+0.144560050 container attach 39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldstine, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, RELEASE=main, summary=Provides 
the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=1763362218, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, version=7) Dec 2 02:47:05 localhost wonderful_goldstine[31182]: 167 167 Dec 2 02:47:05 localhost systemd[1]: libpod-39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f.scope: Deactivated successfully. Dec 2 02:47:05 localhost podman[31168]: 2025-12-02 07:47:05.846059508 +0000 UTC m=+0.147513440 container died 39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldstine, vendor=Red Hat, Inc., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, release=1763362218, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=rhceph) Dec 2 02:47:05 localhost podman[31187]: 2025-12-02 07:47:05.941961081 +0000 UTC m=+0.083967739 container remove 39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_goldstine, release=1763362218, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True) Dec 2 02:47:05 localhost systemd[1]: libpod-conmon-39bed095e95bda3337abed181f66e22fb92632d4191ffc5171b124a2574fc17f.scope: Deactivated successfully. 
Dec 2 02:47:06 localhost podman[31216]: Dec 2 02:47:06 localhost podman[31216]: 2025-12-02 07:47:06.273272727 +0000 UTC m=+0.070116282 container create 94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Dec 2 02:47:06 localhost systemd[1]: Started libpod-conmon-94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b.scope. Dec 2 02:47:06 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70add2ddcf31ea6a188a20d36e456d3936dd0b7e56854bd41976e8a86b26a69/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70add2ddcf31ea6a188a20d36e456d3936dd0b7e56854bd41976e8a86b26a69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:06 localhost podman[31216]: 2025-12-02 07:47:06.247344311 +0000 UTC m=+0.044187856 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70add2ddcf31ea6a188a20d36e456d3936dd0b7e56854bd41976e8a86b26a69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70add2ddcf31ea6a188a20d36e456d3936dd0b7e56854bd41976e8a86b26a69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c70add2ddcf31ea6a188a20d36e456d3936dd0b7e56854bd41976e8a86b26a69/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:06 localhost podman[31216]: 2025-12-02 07:47:06.382202854 +0000 UTC m=+0.179046399 container init 94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True) Dec 2 02:47:06 localhost systemd[1]: var-lib-containers-storage-overlay-4800bc8bec066dbb74d2a6401a155f1e928ad0cc52c02cce9a1b4762d710b134-merged.mount: Deactivated successfully. Dec 2 02:47:06 localhost podman[31216]: 2025-12-02 07:47:06.532292784 +0000 UTC m=+0.329136339 container start 94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., 
description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, version=7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main) Dec 2 02:47:06 localhost podman[31216]: 2025-12-02 07:47:06.533282631 +0000 UTC m=+0.330126186 container attach 94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, name=rhceph, distribution-scope=public, io.openshift.expose-services=) Dec 2 02:47:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test[31231]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Dec 2 02:47:06 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test[31231]: [--no-systemd] [--no-tmpfs] Dec 2 02:47:06 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test[31231]: ceph-volume activate: error: unrecognized arguments: --bad-option Dec 2 02:47:06 localhost systemd[1]: libpod-94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b.scope: Deactivated successfully. Dec 2 02:47:06 localhost podman[31216]: 2025-12-02 07:47:06.672208596 +0000 UTC m=+0.469052171 container died 94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, maintainer=Guillaume Abrioux , distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main) Dec 2 02:47:06 localhost systemd[1]: tmp-crun.Rra5J8.mount: Deactivated successfully. Dec 2 02:47:06 localhost systemd-journald[619]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. 
Dec 2 02:47:06 localhost systemd-journald[619]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 02:47:06 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 02:47:06 localhost systemd[1]: var-lib-containers-storage-overlay-c70add2ddcf31ea6a188a20d36e456d3936dd0b7e56854bd41976e8a86b26a69-merged.mount: Deactivated successfully. Dec 2 02:47:06 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 02:47:06 localhost podman[31236]: 2025-12-02 07:47:06.813961937 +0000 UTC m=+0.132040098 container remove 94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate-test, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
io.openshift.expose-services=) Dec 2 02:47:06 localhost systemd[1]: libpod-conmon-94f280b7b4a48469f40b1e6c35e28cd89a4bf269a9e6e5dd935529160e5c559b.scope: Deactivated successfully. Dec 2 02:47:07 localhost systemd[1]: Reloading. Dec 2 02:47:07 localhost systemd-sysv-generator[31292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:47:07 localhost systemd-rc-local-generator[31288]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:47:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:47:07 localhost systemd[1]: Reloading. Dec 2 02:47:07 localhost systemd-rc-local-generator[31335]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:47:07 localhost systemd-sysv-generator[31341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:47:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:47:07 localhost systemd[1]: Starting Ceph osd.0 for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 02:47:07 localhost podman[31397]: Dec 2 02:47:07 localhost podman[31397]: 2025-12-02 07:47:07.870912293 +0000 UTC m=+0.071717465 container create be0399117839d0b075122d6dadb60dd3d9308f657472f6ee566b94db47904bd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=) Dec 2 02:47:07 localhost systemd[1]: tmp-crun.aC0Cbp.mount: Deactivated successfully. Dec 2 02:47:07 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:07 localhost podman[31397]: 2025-12-02 07:47:07.842486998 +0000 UTC m=+0.043292230 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4780a0ba6841bc2e86e32d453138c911a37f7070f864b5253fa4fdb79eaafe/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4780a0ba6841bc2e86e32d453138c911a37f7070f864b5253fa4fdb79eaafe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4780a0ba6841bc2e86e32d453138c911a37f7070f864b5253fa4fdb79eaafe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4780a0ba6841bc2e86e32d453138c911a37f7070f864b5253fa4fdb79eaafe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f4780a0ba6841bc2e86e32d453138c911a37f7070f864b5253fa4fdb79eaafe/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:07 localhost podman[31397]: 2025-12-02 07:47:07.993879414 +0000 UTC m=+0.194684586 container init be0399117839d0b075122d6dadb60dd3d9308f657472f6ee566b94db47904bd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, GIT_BRANCH=main, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=1763362218, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.) Dec 2 02:47:08 localhost podman[31397]: 2025-12-02 07:47:08.003970068 +0000 UTC m=+0.204775230 container start be0399117839d0b075122d6dadb60dd3d9308f657472f6ee566b94db47904bd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, ceph=True, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, 
io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 02:47:08 localhost podman[31397]: 2025-12-02 07:47:08.004267896 +0000 UTC m=+0.205073108 container attach be0399117839d0b075122d6dadb60dd3d9308f657472f6ee566b94db47904bd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate, RELEASE=main, name=rhceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=) Dec 2 02:47:08 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate[31413]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Dec 2 02:47:08 localhost bash[31397]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Dec 2 02:47:08 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate[31413]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 
--no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0 Dec 2 02:47:08 localhost bash[31397]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0 Dec 2 02:47:08 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate[31413]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0 Dec 2 02:47:08 localhost bash[31397]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0 Dec 2 02:47:08 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate[31413]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Dec 2 02:47:08 localhost bash[31397]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Dec 2 02:47:08 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate[31413]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block Dec 2 02:47:08 localhost bash[31397]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block Dec 2 02:47:08 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate[31413]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Dec 2 02:47:08 localhost bash[31397]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 Dec 2 02:47:08 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate[31413]: --> ceph-volume raw activate successful for osd ID: 0 Dec 2 02:47:08 localhost bash[31397]: --> ceph-volume raw activate successful for osd ID: 0 Dec 2 02:47:08 localhost systemd[1]: libpod-be0399117839d0b075122d6dadb60dd3d9308f657472f6ee566b94db47904bd5.scope: Deactivated successfully. 
Dec 2 02:47:08 localhost podman[31397]: 2025-12-02 07:47:08.673221791 +0000 UTC m=+0.874026973 container died be0399117839d0b075122d6dadb60dd3d9308f657472f6ee566b94db47904bd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, version=7, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public) Dec 2 02:47:08 localhost podman[31543]: 2025-12-02 07:47:08.754358162 +0000 UTC m=+0.071265572 container remove be0399117839d0b075122d6dadb60dd3d9308f657472f6ee566b94db47904bd5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0-activate, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 
7, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7) Dec 2 02:47:08 localhost systemd[1]: var-lib-containers-storage-overlay-0f4780a0ba6841bc2e86e32d453138c911a37f7070f864b5253fa4fdb79eaafe-merged.mount: Deactivated successfully. Dec 2 02:47:09 localhost podman[31604]: Dec 2 02:47:09 localhost podman[31604]: 2025-12-02 07:47:09.045485123 +0000 UTC m=+0.070514512 container create 3886dff7ff9b490471697e906f326979cbdf63a2e30cddc8480dbb69bd74263a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-type=git, RELEASE=main, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, release=1763362218, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457f1da0c93fed37fafc473be3799786fad35963b1e4f95fba7b386a56ef4ea/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:09 localhost podman[31604]: 2025-12-02 07:47:09.016042281 +0000 UTC m=+0.041071720 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457f1da0c93fed37fafc473be3799786fad35963b1e4f95fba7b386a56ef4ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457f1da0c93fed37fafc473be3799786fad35963b1e4f95fba7b386a56ef4ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457f1da0c93fed37fafc473be3799786fad35963b1e4f95fba7b386a56ef4ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457f1da0c93fed37fafc473be3799786fad35963b1e4f95fba7b386a56ef4ea/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:09 localhost podman[31604]: 2025-12-02 07:47:09.168952937 +0000 UTC m=+0.193982286 container init 3886dff7ff9b490471697e906f326979cbdf63a2e30cddc8480dbb69bd74263a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, version=7, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, CEPH_POINT_RELEASE=) Dec 2 02:47:09 localhost podman[31604]: 2025-12-02 07:47:09.178495687 +0000 UTC m=+0.203525096 container start 3886dff7ff9b490471697e906f326979cbdf63a2e30cddc8480dbb69bd74263a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, build-date=2025-11-26T19:44:28Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main) Dec 2 02:47:09 localhost bash[31604]: 3886dff7ff9b490471697e906f326979cbdf63a2e30cddc8480dbb69bd74263a Dec 2 02:47:09 localhost systemd[1]: Started Ceph osd.0 for c7c8e171-a193-56fb-95fa-8879fcfa7074. Dec 2 02:47:09 localhost ceph-osd[31622]: set uid:gid to 167:167 (ceph:ceph) Dec 2 02:47:09 localhost ceph-osd[31622]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Dec 2 02:47:09 localhost ceph-osd[31622]: pidfile_write: ignore empty --pid-file Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:09 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on 
/var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:09 localhost ceph-osd[31622]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) close Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) close Dec 2 02:47:09 localhost ceph-osd[31622]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal Dec 2 02:47:09 localhost ceph-osd[31622]: load: jerasure load: lrc Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:09 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:09 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) close Dec 2 02:47:10 localhost podman[31714]: Dec 2 02:47:10 localhost podman[31714]: 2025-12-02 07:47:10.017583267 +0000 UTC m=+0.055400330 container create e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_johnson, io.buildah.version=1.41.4, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, 
build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, release=1763362218, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) close Dec 2 02:47:10 localhost systemd[1]: Started libpod-conmon-e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0.scope. Dec 2 02:47:10 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:10 localhost podman[31714]: 2025-12-02 07:47:09.995714411 +0000 UTC m=+0.033531514 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:10 localhost podman[31714]: 2025-12-02 07:47:10.099820007 +0000 UTC m=+0.137637100 container init e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_johnson, name=rhceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., release=1763362218, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 02:47:10 localhost podman[31714]: 2025-12-02 07:47:10.108222926 +0000 UTC m=+0.146040029 container start e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_johnson, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, name=rhceph, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, version=7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main) Dec 2 02:47:10 localhost podman[31714]: 2025-12-02 07:47:10.108435732 +0000 UTC m=+0.146252835 container attach e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_johnson, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vendor=Red Hat, Inc., 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main) Dec 2 02:47:10 localhost peaceful_johnson[31733]: 167 167 Dec 2 02:47:10 localhost systemd[1]: libpod-e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0.scope: Deactivated successfully. Dec 2 02:47:10 localhost podman[31714]: 2025-12-02 07:47:10.113020247 +0000 UTC m=+0.150837340 container died e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_johnson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph) Dec 2 02:47:10 localhost systemd[1]: var-lib-containers-storage-overlay-7fcfaf8331c0e0e17ef8d2ce0e4e33b8252767c22528a3ca07b06254ca1ef80e-merged.mount: Deactivated successfully. 
Dec 2 02:47:10 localhost podman[31738]: 2025-12-02 07:47:10.194947278 +0000 UTC m=+0.069412142 container remove e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_johnson, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1763362218, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, version=7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 02:47:10 localhost systemd[1]: libpod-conmon-e5931ac9c15211a2493a684385e23fac84ef4cb392fab3d6f162ba4b3299a9e0.scope: Deactivated successfully. 
Dec 2 02:47:10 localhost ceph-osd[31622]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second Dec 2 02:47:10 localhost ceph-osd[31622]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196 Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3ae00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs mount Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs mount shared_bdev_used = 0 Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 
db.slow,7136398540 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: RocksDB version: 7.9.2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Git sha 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: DB SUMMARY Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: DB Session ID: RL381G0UN127R7VJTA20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: CURRENT file: CURRENT Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: IDENTITY file: IDENTITY Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.error_if_exists: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.create_if_missing: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.env: 0x5581cb95fc70 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.fs: LegacyFileSystem Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.info_log: 0x5581cbad6be0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_file_opening_threads: 16 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.statistics: (nil) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.use_fsync: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.max_log_file_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.log_file_time_to_roll: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.keep_log_file_num: 1000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.recycle_log_file_num: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_fallocate: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_mmap_reads: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_mmap_writes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.use_direct_reads: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.create_missing_column_families: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.db_log_dir: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_dir: db.wal Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_cache_numshardbits: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.advise_random_on_open: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.db_write_buffer_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_manager: 0x5581cab24140 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.random_access_max_buffer_size: 
1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.use_adaptive_mutex: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.rate_limiter: (nil) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_recovery_mode: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_thread_tracking: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_pipelined_write: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.unordered_write: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.row_cache: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_ingest_behind: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.two_write_queues: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.manual_wal_flush: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_compression: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.atomic_flush: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.persist_stats_to_disk: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.log_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.file_checksum_gen_factory: Unknown Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.best_efforts_recovery: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_data_in_errors: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.db_host_id: __hostname__ Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enforce_single_del_contracts: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_background_jobs: 4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_background_compactions: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_subcompactions: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.delayed_write_rate : 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.stats_dump_period_sec: 600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.stats_persist_period_sec: 600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_open_files: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bytes_per_sync: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_readahead_size: 2097152 Dec 2 
02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_background_flushes: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Compression algorithms supported: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kZSTD supported: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kXpressCompression supported: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kBZip2Compression supported: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kLZ4Compression supported: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kZlibCompression supported: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kSnappyCompression supported: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: DMutex implementation: pthread_mutex_t Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6da0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab12850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6da0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab12850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6da0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab12850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6da0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab12850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6da0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab12850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb:
Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 
02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6da0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab12850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x5581cbad6da0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab12850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: 
rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost 
ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: 
rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6fc0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: 
LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost 
ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6fc0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 
0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 
02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: 
-1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbad6fc0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 
02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 
localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5566] Recovered from 
manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a7df2b79-a8f8-4f57-b14e-09b951f22d3a Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630327127, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
[db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630327393, "job": 1, "event": "recovery_finished"}
Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 2 02:47:10 localhost ceph-osd[31622]: freelist init
Dec 2 02:47:10 localhost ceph-osd[31622]: freelist _read_cfg
Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs umount
Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) close
Dec 2 02:47:10 localhost podman[31962]:
Dec 2 02:47:10 localhost podman[31962]: 2025-12-02 07:47:10.507162835 +0000 UTC m=+0.066097992 container create 1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph)
Dec 2 02:47:10 localhost systemd[1]: Started libpod-conmon-1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d.scope.
Dec 2 02:47:10 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb4002e1e42fbaa3cc2086fd25a5a1d9772e0ff1a5838be07ad7ff814184c2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb4002e1e42fbaa3cc2086fd25a5a1d9772e0ff1a5838be07ad7ff814184c2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb4002e1e42fbaa3cc2086fd25a5a1d9772e0ff1a5838be07ad7ff814184c2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:10 localhost podman[31962]: 2025-12-02 07:47:10.48458477 +0000 UTC m=+0.043519947 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb4002e1e42fbaa3cc2086fd25a5a1d9772e0ff1a5838be07ad7ff814184c2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 2 02:47:10 localhost ceph-osd[31622]: bdev(0x5581cab3b180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs mount
Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 2 02:47:10 localhost ceph-osd[31622]: bluefs mount shared_bdev_used = 4718592
Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: RocksDB version: 7.9.2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Git sha 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: DB SUMMARY
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: DB Session ID: RL381G0UN127R7VJTA21
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: CURRENT file: CURRENT
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: IDENTITY file: IDENTITY
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.error_if_exists: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.create_if_missing: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.flush_verify_memtable_count: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.env: 0x5581cabc6380
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.fs: LegacyFileSystem
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.info_log: 0x5581cbb4f640
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_file_opening_threads: 16
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.statistics: (nil)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.use_fsync: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_log_file_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_manifest_file_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.log_file_time_to_roll: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.keep_log_file_num: 1000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.recycle_log_file_num: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_fallocate: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_mmap_reads: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_mmap_writes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.use_direct_reads: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.create_missing_column_families: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.db_log_dir:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_dir: db.wal
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_cache_numshardbits: 6
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.WAL_ttl_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.WAL_size_limit_MB: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.manifest_preallocation_size: 4194304
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.advise_random_on_open: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.db_write_buffer_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_manager: 0x5581cab25540
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.random_access_max_buffer_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.use_adaptive_mutex: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.rate_limiter: (nil)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_recovery_mode: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_thread_tracking: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_pipelined_write: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.unordered_write: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_concurrent_memtable_write: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_thread_max_yield_usec: 100
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_thread_slow_yield_usec: 3
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.row_cache: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_filter: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.avoid_flush_during_recovery: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_ingest_behind: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.two_write_queues: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.manual_wal_flush: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_compression: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.atomic_flush: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.persist_stats_to_disk: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_dbid_to_manifest: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.log_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.file_checksum_gen_factory: Unknown
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.best_efforts_recovery: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.allow_data_in_errors: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.db_host_id: __hostname__
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enforce_single_del_contracts: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_background_jobs: 4
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_background_compactions: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_subcompactions: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.avoid_flush_during_shutdown: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.writable_file_max_buffer_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.delayed_write_rate : 16777216
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_total_wal_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.stats_dump_period_sec: 600
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.stats_persist_period_sec: 600
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.stats_history_buffer_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_open_files: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bytes_per_sync: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.wal_bytes_per_sync: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.strict_bytes_per_sync: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_readahead_size: 2097152
Dec 2 02:47:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb4002e1e42fbaa3cc2086fd25a5a1d9772e0ff1a5838be07ad7ff814184c2e/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_background_flushes: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Compression algorithms supported:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kZSTD supported: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kXpressCompression supported: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kBZip2Compression supported: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kLZ4Compression supported: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kZlibCompression supported: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: #011kSnappyCompression supported: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4e080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4e080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4e080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:10
localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost 
ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4e080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost 
ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4e080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost podman[31962]: 2025-12-02 07:47:10.609238136 +0000 UTC m=+0.168173313 container init 1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat 
Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: 
true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost 
ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4e080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost podman[31962]: 2025-12-02 07:47:10.619827224 +0000 UTC m=+0.178762381 container start 1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 
7, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: 
true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost 
ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4e080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab122d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost podman[31962]: 2025-12-02 07:47:10.620737239 +0000 UTC m=+0.179672416 container attach 1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test, name=rhceph, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, build-date=2025-11-26T19:44:28Z, 
com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, RELEASE=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc.) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4f2c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab13610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 
localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: 
kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 
02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4f2c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 
data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab13610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 
localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost 
ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.merge_operator: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.sst_partitioner_factory: None Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5581cbb4f2c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5581cab13610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression: LZ4 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.num_levels: 7 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:10 localhost 
ceph-osd[31622]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: 
rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a7df2b79-a8f8-4f57-b14e-09b951f22d3a Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630622344, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630628467, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661630, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": 
"a7df2b79-a8f8-4f57-b14e-09b951f22d3a", "db_session_id": "RL381G0UN127R7VJTA21", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630632431, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661630, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7df2b79-a8f8-4f57-b14e-09b951f22d3a", "db_session_id": "RL381G0UN127R7VJTA21", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630636350, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, 
"index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661630, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7df2b79-a8f8-4f57-b14e-09b951f22d3a", "db_session_id": "RL381G0UN127R7VJTA21", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630640263, "job": 1, "event": "recovery_finished"} Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5581cabd8380 Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: DB pointer 0x5581cba33a00 Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options 
compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4 Dec 2 02:47:10 localhost ceph-osd[31622]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 02:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 
0.00 1 0.006 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581cab122d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-0] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581cab122d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Dec 2 02:47:10 localhost ceph-osd[31622]:
/builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Dec 2 02:47:10 localhost ceph-osd[31622]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Dec 2 02:47:10 localhost ceph-osd[31622]: _get_class not permitted to load lua Dec 2 02:47:10 localhost ceph-osd[31622]: _get_class not permitted to load sdk Dec 2 02:47:10 localhost ceph-osd[31622]: _get_class not permitted to load test_remote_reads Dec 2 02:47:10 localhost ceph-osd[31622]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients Dec 2 02:47:10 localhost ceph-osd[31622]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Dec 2 02:47:10 localhost ceph-osd[31622]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds Dec 2 02:47:10 localhost ceph-osd[31622]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Dec 2 02:47:10 localhost ceph-osd[31622]: osd.0 0 load_pgs Dec 2 02:47:10 localhost ceph-osd[31622]: osd.0 0 load_pgs opened 0 pgs Dec 2 02:47:10 localhost ceph-osd[31622]: osd.0 0 log_to_monitors true Dec 2 02:47:10 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0[31618]: 2025-12-02T07:47:10.674+0000 7f80a00e9a80 -1 osd.0 0 log_to_monitors true Dec 2 02:47:10 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test[31977]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Dec 2 02:47:10 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test[31977]: [--no-systemd] [--no-tmpfs] Dec 2 02:47:10 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test[31977]: ceph-volume activate: error: unrecognized arguments: --bad-option Dec 2 02:47:10 localhost systemd[1]: libpod-1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d.scope: Deactivated successfully. 
Dec 2 02:47:10 localhost podman[31962]: 2025-12-02 07:47:10.841838142 +0000 UTC m=+0.400773329 container died 1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test, ceph=True, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.) 
Dec 2 02:47:10 localhost podman[32198]: 2025-12-02 07:47:10.918380777 +0000 UTC m=+0.065636658 container remove 1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate-test, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 02:47:10 localhost systemd[1]: libpod-conmon-1ee6409ad0c389872f4246af8672e412adf06162189a0e98e03b6f35d069a11d.scope: Deactivated successfully. Dec 2 02:47:11 localhost systemd[1]: Reloading. Dec 2 02:47:11 localhost systemd-sysv-generator[32255]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:47:11 localhost systemd-rc-local-generator[32251]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 02:47:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:47:11 localhost systemd[1]: Reloading. Dec 2 02:47:11 localhost systemd-rc-local-generator[32294]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:47:11 localhost systemd-sysv-generator[32299]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:47:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:47:11 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Dec 2 02:47:11 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Dec 2 02:47:11 localhost systemd[1]: Starting Ceph osd.3 for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 02:47:12 localhost podman[32359]: Dec 2 02:47:12 localhost podman[32359]: 2025-12-02 07:47:12.066604669 +0000 UTC m=+0.070488331 container create 3a41e61c6b1328ff22fb256c600bf665043ad9bde1cbbf1766e9a54f0124238e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, version=7) Dec 2 02:47:12 localhost systemd[1]: Started libcrun container. 
Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74262ca50bbe4ea2feed3107115f357b8ff80a7f451ed5d28b11548cbd9ebcf9/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost podman[32359]: 2025-12-02 07:47:12.038293598 +0000 UTC m=+0.042177270 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74262ca50bbe4ea2feed3107115f357b8ff80a7f451ed5d28b11548cbd9ebcf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74262ca50bbe4ea2feed3107115f357b8ff80a7f451ed5d28b11548cbd9ebcf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74262ca50bbe4ea2feed3107115f357b8ff80a7f451ed5d28b11548cbd9ebcf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74262ca50bbe4ea2feed3107115f357b8ff80a7f451ed5d28b11548cbd9ebcf9/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:12 localhost podman[32359]: 2025-12-02 07:47:12.179900206 +0000 UTC m=+0.183783858 container init 3a41e61c6b1328ff22fb256c600bf665043ad9bde1cbbf1766e9a54f0124238e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=1763362218, distribution-scope=public, 
GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux ) Dec 2 02:47:12 localhost podman[32359]: 2025-12-02 07:47:12.190136604 +0000 UTC m=+0.194020266 container start 3a41e61c6b1328ff22fb256c600bf665043ad9bde1cbbf1766e9a54f0124238e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph) Dec 2 02:47:12 localhost podman[32359]: 2025-12-02 07:47:12.190467114 +0000 UTC m=+0.194350766 container attach 3a41e61c6b1328ff22fb256c600bf665043ad9bde1cbbf1766e9a54f0124238e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z) Dec 2 02:47:12 localhost ceph-osd[31622]: osd.0 0 done with init, starting boot process Dec 2 02:47:12 localhost ceph-osd[31622]: osd.0 0 start_boot Dec 2 02:47:12 localhost ceph-osd[31622]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1 Dec 2 02:47:12 localhost ceph-osd[31622]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Dec 2 02:47:12 localhost ceph-osd[31622]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Dec 2 02:47:12 
localhost ceph-osd[31622]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Dec 2 02:47:12 localhost ceph-osd[31622]: osd.0 0 bench count 12288000 bsize 4 KiB Dec 2 02:47:12 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate[32373]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 2 02:47:12 localhost bash[32359]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 2 02:47:12 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate[32373]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Dec 2 02:47:12 localhost bash[32359]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Dec 2 02:47:12 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate[32373]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Dec 2 02:47:12 localhost bash[32359]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Dec 2 02:47:12 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate[32373]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 2 02:47:12 localhost bash[32359]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 2 02:47:12 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate[32373]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block Dec 2 02:47:12 localhost bash[32359]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block Dec 2 02:47:12 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate[32373]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 2 02:47:12 localhost bash[32359]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3 Dec 2 02:47:12 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate[32373]: --> ceph-volume raw activate successful for osd ID: 3 Dec 2 02:47:12 localhost bash[32359]: --> ceph-volume raw activate successful for osd ID: 3 Dec 2 02:47:12 localhost systemd[1]: libpod-3a41e61c6b1328ff22fb256c600bf665043ad9bde1cbbf1766e9a54f0124238e.scope: Deactivated successfully. Dec 2 02:47:12 localhost podman[32504]: 2025-12-02 07:47:12.914370126 +0000 UTC m=+0.054702562 container died 3a41e61c6b1328ff22fb256c600bf665043ad9bde1cbbf1766e9a54f0124238e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 02:47:13 localhost podman[32504]: 2025-12-02 07:47:13.044270194 +0000 UTC m=+0.184602600 container remove 3a41e61c6b1328ff22fb256c600bf665043ad9bde1cbbf1766e9a54f0124238e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3-activate, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, RELEASE=main, name=rhceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public) Dec 2 02:47:13 localhost systemd[1]: var-lib-containers-storage-overlay-74262ca50bbe4ea2feed3107115f357b8ff80a7f451ed5d28b11548cbd9ebcf9-merged.mount: Deactivated successfully. 
Dec 2 02:47:13 localhost podman[32564]: Dec 2 02:47:13 localhost podman[32564]: 2025-12-02 07:47:13.281749234 +0000 UTC m=+0.070790949 container create effc5649c674e91178ff79d0d995136974a324018af8217643cd4efac175683e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True) Dec 2 02:47:13 localhost podman[32564]: 2025-12-02 07:47:13.242780152 +0000 UTC m=+0.031821907 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 02:47:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6545d2ae4e904d25466047e0fcd96d182ed72298d5730c999cd5c073a92284b/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6545d2ae4e904d25466047e0fcd96d182ed72298d5730c999cd5c073a92284b/merged/etc/ceph/ceph.conf supports 
timestamps until 2038 (0x7fffffff) Dec 2 02:47:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6545d2ae4e904d25466047e0fcd96d182ed72298d5730c999cd5c073a92284b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6545d2ae4e904d25466047e0fcd96d182ed72298d5730c999cd5c073a92284b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6545d2ae4e904d25466047e0fcd96d182ed72298d5730c999cd5c073a92284b/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff) Dec 2 02:47:13 localhost podman[32564]: 2025-12-02 07:47:13.449135445 +0000 UTC m=+0.238177160 container init effc5649c674e91178ff79d0d995136974a324018af8217643cd4efac175683e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.openshift.expose-services=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main) Dec 2 02:47:13 localhost podman[32564]: 2025-12-02 07:47:13.481134266 +0000 UTC m=+0.270176061 container start effc5649c674e91178ff79d0d995136974a324018af8217643cd4efac175683e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3, io.buildah.version=1.41.4, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main) Dec 2 02:47:13 localhost bash[32564]: effc5649c674e91178ff79d0d995136974a324018af8217643cd4efac175683e Dec 2 02:47:13 localhost systemd[1]: Started Ceph osd.3 for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 02:47:13 localhost ceph-osd[32582]: set uid:gid to 167:167 (ceph:ceph)
Dec 2 02:47:13 localhost ceph-osd[32582]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 2 02:47:13 localhost ceph-osd[32582]: pidfile_write: ignore empty --pid-file
Dec 2 02:47:13 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 2 02:47:13 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 2 02:47:13 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:13 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 2 02:47:13 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 2 02:47:13 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 2 02:47:13 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:13 localhost ceph-osd[32582]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Dec 2 02:47:13 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) close
Dec 2 02:47:13 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) close
Dec 2 02:47:14 localhost ceph-osd[32582]: starting osd.3 osd_data /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
Dec 2 02:47:14 localhost ceph-osd[32582]: load: jerasure load: lrc
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) close
Dec 2 02:47:14 localhost podman[32674]:
Dec 2 02:47:14 localhost podman[32674]: 2025-12-02 07:47:14.276475864 +0000 UTC m=+0.072009863 container create 732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_ganguly, version=7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, RELEASE=main)
Dec 2 02:47:14 localhost systemd[1]: Started libpod-conmon-732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d.scope.
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) close
Dec 2 02:47:14 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:14 localhost podman[32674]: 2025-12-02 07:47:14.249019666 +0000 UTC m=+0.044553635 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:14 localhost podman[32674]: 2025-12-02 07:47:14.377082945 +0000 UTC m=+0.172616914 container init 732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_ganguly, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main)
Dec 2 02:47:14 localhost quirky_ganguly[32687]: 167 167
Dec 2 02:47:14 localhost systemd[1]: libpod-732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d.scope: Deactivated successfully.
Dec 2 02:47:14 localhost podman[32674]: 2025-12-02 07:47:14.407777152 +0000 UTC m=+0.203311131 container start 732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_ganguly, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, release=1763362218)
Dec 2 02:47:14 localhost podman[32674]: 2025-12-02 07:47:14.408338057 +0000 UTC m=+0.203872036 container attach 732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_ganguly, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, name=rhceph, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True)
Dec 2 02:47:14 localhost podman[32674]: 2025-12-02 07:47:14.411077501 +0000 UTC m=+0.206611470 container died 732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_ganguly, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 2 02:47:14 localhost systemd[1]: var-lib-containers-storage-overlay-b5cfded5dd0d470e991bef72eeebcd268ef284defdab52262bd3a7aa992ff5c8-merged.mount: Deactivated successfully.
Dec 2 02:47:14 localhost podman[32696]: 2025-12-02 07:47:14.515069054 +0000 UTC m=+0.111717784 container remove 732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_ganguly, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, architecture=x86_64)
Dec 2 02:47:14 localhost systemd[1]: libpod-conmon-732b4cb93825d980d158b91aa225db1c98b5052aebe04cf979d8b639f8c78c3d.scope: Deactivated successfully.
Dec 2 02:47:14 localhost ceph-osd[32582]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 2 02:47:14 localhost ceph-osd[32582]: osd.3:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfae00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs mount
Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs mount shared_bdev_used = 0
Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: RocksDB version: 7.9.2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Git sha 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: DB SUMMARY
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: DB Session ID: FHL87VSJHB1TI1XONKXI
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: CURRENT file: CURRENT
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: IDENTITY file: IDENTITY
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.error_if_exists: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.create_if_missing: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.flush_verify_memtable_count: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.env: 0x56524408ec40
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.fs: LegacyFileSystem
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.info_log: 0x565244d9c780
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_file_opening_threads: 16
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.statistics: (nil)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.use_fsync: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_log_file_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_manifest_file_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.log_file_time_to_roll: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.keep_log_file_num: 1000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.recycle_log_file_num: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_fallocate: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_mmap_reads: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_mmap_writes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.use_direct_reads: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.create_missing_column_families: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.db_log_dir:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_dir: db.wal
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_cache_numshardbits: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.WAL_ttl_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.WAL_size_limit_MB: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.manifest_preallocation_size: 4194304
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.advise_random_on_open: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.db_write_buffer_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_manager: 0x565243de4140
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.random_access_max_buffer_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.use_adaptive_mutex: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.rate_limiter: (nil)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_recovery_mode: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_thread_tracking: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_pipelined_write: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.unordered_write: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_concurrent_memtable_write: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_thread_max_yield_usec: 100
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_thread_slow_yield_usec: 3
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.row_cache: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.avoid_flush_during_recovery: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_ingest_behind: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.two_write_queues: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.manual_wal_flush: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_compression: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.atomic_flush: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.persist_stats_to_disk: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_dbid_to_manifest: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.log_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.file_checksum_gen_factory: Unknown
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.best_efforts_recovery: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_data_in_errors: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.db_host_id: __hostname__
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enforce_single_del_contracts: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_background_jobs: 4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_background_compactions: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_subcompactions: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.avoid_flush_during_shutdown: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.writable_file_max_buffer_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.delayed_write_rate : 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_total_wal_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.stats_dump_period_sec: 600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.stats_persist_period_sec: 600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.stats_history_buffer_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_open_files: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bytes_per_sync: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_bytes_per_sync: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.strict_bytes_per_sync: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_readahead_size: 2097152
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_background_flushes: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Compression algorithms supported:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kZSTD supported: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kXpressCompression supported: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kBZip2Compression supported: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kLZ4Compression supported: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kZlibCompression supported: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kSnappyCompression supported: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14
localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 
localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost 
ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost 
ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 
02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:14 
localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 
02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 
localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 2 02:47:14 localhost 
ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9c940)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd2850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cb60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cb60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 
0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 
02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: 
-1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cb60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 
02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 
localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5566] Recovered from 
manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fce38134-5a74-433d-a8c4-f491f68a5a3b Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661634645380, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
[db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661634645555, "job": 1, "event": "recovery_finished"} Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old nid_max 1025 Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old blobid_max 10240 Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta min_alloc_size 0x1000 Dec 2 02:47:14 localhost ceph-osd[32582]: freelist init Dec 2 02:47:14 localhost ceph-osd[32582]: freelist _read_cfg Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs umount Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) close Dec 2 02:47:14 localhost podman[32910]: Dec 2 02:47:14 localhost podman[32910]: 2025-12-02 07:47:14.726471504 +0000 
UTC m=+0.069764461 container create 0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, release=1763362218, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True)
Dec 2 02:47:14 localhost systemd[1]: Started libpod-conmon-0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68.scope.
Dec 2 02:47:14 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d319a04eb84dc57153a6a551fb3dd77eaba1c3870937f82853f3d915426996/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:14 localhost podman[32910]: 2025-12-02 07:47:14.693741432 +0000 UTC m=+0.037034369 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d319a04eb84dc57153a6a551fb3dd77eaba1c3870937f82853f3d915426996/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d319a04eb84dc57153a6a551fb3dd77eaba1c3870937f82853f3d915426996/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:14 localhost podman[32910]: 2025-12-02 07:47:14.829217384 +0000 UTC m=+0.172510361 container init 0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 02:47:14 localhost podman[32910]: 2025-12-02 07:47:14.861634346 +0000 UTC m=+0.204927323 container start 0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, architecture=x86_64, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=)
Dec 2 02:47:14 localhost podman[32910]: 2025-12-02 07:47:14.862010676 +0000 UTC m=+0.205303653 container attach 0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=)
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 2 02:47:14 localhost ceph-osd[32582]: bdev(0x565243dfb180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs mount
Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 2 02:47:14 localhost ceph-osd[32582]: bluefs mount shared_bdev_used = 4718592
Dec 2 02:47:14 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: RocksDB version: 7.9.2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Git sha 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: DB SUMMARY
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: DB Session ID: FHL87VSJHB1TI1XONKXJ
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: CURRENT file: CURRENT
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: IDENTITY file: IDENTITY
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.error_if_exists: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.create_if_missing: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.flush_verify_memtable_count: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.env: 0x565243f20a80
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.fs: LegacyFileSystem
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.info_log: 0x565243f0e460
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_file_opening_threads: 16
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.statistics: (nil)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.use_fsync: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_log_file_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_manifest_file_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.log_file_time_to_roll: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.keep_log_file_num: 1000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.recycle_log_file_num: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_fallocate: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_mmap_reads: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_mmap_writes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.use_direct_reads: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.create_missing_column_families: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.db_log_dir:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_dir: db.wal
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_cache_numshardbits: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.WAL_ttl_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.WAL_size_limit_MB: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.manifest_preallocation_size: 4194304
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.advise_random_on_open: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.db_write_buffer_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_manager: 0x565243de4140
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.random_access_max_buffer_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.use_adaptive_mutex: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.rate_limiter: (nil)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_recovery_mode: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_thread_tracking: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_pipelined_write: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.unordered_write: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_concurrent_memtable_write: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_thread_max_yield_usec: 100
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_thread_slow_yield_usec: 3
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.row_cache: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.avoid_flush_during_recovery: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_ingest_behind: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.two_write_queues: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.manual_wal_flush: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_compression: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.atomic_flush: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.persist_stats_to_disk: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_dbid_to_manifest: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.log_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.file_checksum_gen_factory: Unknown
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.best_efforts_recovery: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.allow_data_in_errors: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.db_host_id: __hostname__
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enforce_single_del_contracts: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_background_jobs: 4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_background_compactions: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_subcompactions: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.avoid_flush_during_shutdown: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.writable_file_max_buffer_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.delayed_write_rate : 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_total_wal_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.stats_dump_period_sec: 600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.stats_persist_period_sec: 600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.stats_history_buffer_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_open_files: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bytes_per_sync: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.wal_bytes_per_sync: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.strict_bytes_per_sync: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_readahead_size: 2097152
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_background_flushes: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Compression algorithms supported:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kZSTD supported: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kXpressCompression supported: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kBZip2Compression supported: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kLZ4Compression supported: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kZlibCompression supported: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: #011kSnappyCompression supported: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cd00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cd00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cd00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14
localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 
02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: 
rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cd00)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 
02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 
2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: 
rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost 
ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None Dec 2 02:47:14 localhost ceph-osd[32582]: 
rocksdb: Options.compaction_filter: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cd00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.bottommost_compression: Disabled Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cd00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565244d9cd00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd22d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565243f0e820)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565243f0e820)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x565243dd3610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style:
kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 
02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.merge_operator: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_filter_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.sst_partitioner_factory: None Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565243f0e820)
 cache_index_and_filter_blocks: 1
 cache_index_and_filter_blocks_with_high_priority: 0
 pin_l0_filter_and_index_blocks_in_cache: 0
 pin_top_level_index_and_filter: 1
 index_type: 0
 data_block_index_type: 0
 index_shortening: 1
 data_block_hash_table_util_ratio: 0.750000
 checksum: 4
 no_block_cache: 0
 block_cache: 0x565243dd3610
 block_cache_name: BinnedLRUCache
 block_cache_options:
 capacity : 536870912
 num_shard_bits : 4
 strict_capacity_limit : 0
 high_pri_pool_ratio: 0.000
 block_cache_compressed: (nil)
 persistent_cache: (nil)
 block_size: 4096
 block_size_deviation: 10
 block_restart_interval: 16
 index_block_restart_interval: 1
 metadata_block_size: 4096
 partition_filters: 0
 use_delta_encoding: 1
 filter_policy: bloomfilter
 whole_key_filtering: 1
 verify_compression: 0
 read_amp_bytes_per_bit: 0
 format_version: 5
 enable_index_compression: 1
 block_align: 0
 max_auto_readahead_size: 262144
 prepopulate_block_cache: 0
 initial_auto_readahead_size: 8192
 num_file_reads_for_auto_readahead: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.write_buffer_size: 16777216 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number: 64 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression: LZ4 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression: Disabled Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.num_levels: 7 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 02:47:14
localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.level: 32767 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.enabled: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 02:47:14 localhost 
ceph-osd[32582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.arena_block_size: 1048576 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_support: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.bloom_locality: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.max_successive_merges: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.force_consistency_checks: 1 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
Options.report_bg_io_stats: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.ttl: 2592000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_files: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.min_blob_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_size: 268435456 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.blob_file_starting_level: 0 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:635] 	(skipping printing options) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/column_family.cc:635] 	(skipping printing options) Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34,
last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fce38134-5a74-433d-a8c4-f491f68a5a3b Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661634901542, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: 
EVENT_LOG_v1 {"time_micros": 1764661634922225, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661634, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fce38134-5a74-433d-a8c4-f491f68a5a3b", "db_session_id": "FHL87VSJHB1TI1XONKXJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661634926404, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 463, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 
36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661634, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fce38134-5a74-433d-a8c4-f491f68a5a3b", "db_session_id": "FHL87VSJHB1TI1XONKXJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661634955485, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", 
"compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661634, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fce38134-5a74-433d-a8c4-f491f68a5a3b", "db_session_id": "FHL87VSJHB1TI1XONKXJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661634961041, "job": 1, "event": "recovery_finished"} Dec 2 02:47:14 localhost ceph-osd[32582]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Dec 2 02:47:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x565243e3a700 Dec 2 02:47:15 localhost ceph-osd[32582]: rocksdb: DB pointer 0x565244cf3a00 Dec 2 02:47:15 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 2 02:47:15 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super from 4, latest 4 Dec 2 02:47:15 localhost ceph-osd[32582]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super done Dec 2 02:47:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 02:47:15 
localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.2 total, 0.2 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0
 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-0] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 460.80 MB usag Dec 2 02:47:15 localhost ceph-osd[32582]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Dec 2 02:47:15 localhost ceph-osd[32582]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Dec 2 02:47:15 localhost ceph-osd[32582]: _get_class not permitted to load lua Dec 2 02:47:15 localhost ceph-osd[32582]: _get_class not permitted to load sdk Dec 2 02:47:15 localhost ceph-osd[32582]: _get_class not permitted to load test_remote_reads Dec 2 02:47:15 localhost ceph-osd[32582]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for clients Dec 2 02:47:15
localhost ceph-osd[32582]: osd.3 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Dec 2 02:47:15 localhost ceph-osd[32582]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for osds Dec 2 02:47:15 localhost ceph-osd[32582]: osd.3 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Dec 2 02:47:15 localhost ceph-osd[32582]: osd.3 0 load_pgs Dec 2 02:47:15 localhost ceph-osd[32582]: osd.3 0 load_pgs opened 0 pgs Dec 2 02:47:15 localhost ceph-osd[32582]: osd.3 0 log_to_monitors true Dec 2 02:47:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3[32578]: 2025-12-02T07:47:15.055+0000 7f6818a06a80 -1 osd.3 0 log_to_monitors true Dec 2 02:47:15 localhost condescending_khayyam[32926]: { Dec 2 02:47:15 localhost condescending_khayyam[32926]: "580fd654-ce1e-4384-8610-e58c3d508de1": { Dec 2 02:47:15 localhost condescending_khayyam[32926]: "ceph_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074", Dec 2 02:47:15 localhost condescending_khayyam[32926]: "device": "/dev/mapper/ceph_vg1-ceph_lv1", Dec 2 02:47:15 localhost condescending_khayyam[32926]: "osd_id": 3, Dec 2 02:47:15 localhost condescending_khayyam[32926]: "osd_uuid": "580fd654-ce1e-4384-8610-e58c3d508de1", Dec 2 02:47:15 localhost condescending_khayyam[32926]: "type": "bluestore" Dec 2 02:47:15 localhost condescending_khayyam[32926]: }, Dec 2 02:47:15 localhost condescending_khayyam[32926]: "79866ec3-47a0-4109-900e-7f4b902017d5": { Dec 2 02:47:15 localhost condescending_khayyam[32926]: "ceph_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074", Dec 2 02:47:15 localhost condescending_khayyam[32926]: "device": "/dev/mapper/ceph_vg0-ceph_lv0", Dec 2 02:47:15 localhost condescending_khayyam[32926]: "osd_id": 0, Dec 2 02:47:15 localhost condescending_khayyam[32926]: "osd_uuid": "79866ec3-47a0-4109-900e-7f4b902017d5", Dec 2 02:47:15 localhost condescending_khayyam[32926]: "type": "bluestore" Dec 2 02:47:15 localhost condescending_khayyam[32926]: } 
Dec 2 02:47:15 localhost condescending_khayyam[32926]: }
Dec 2 02:47:15 localhost systemd[1]: libpod-0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68.scope: Deactivated successfully.
Dec 2 02:47:15 localhost podman[32910]: 2025-12-02 07:47:15.409434911 +0000 UTC m=+0.752727928 container died 0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 02:47:15 localhost systemd[1]: tmp-crun.jIHSbu.mount: Deactivated successfully.
Dec 2 02:47:15 localhost systemd[1]: var-lib-containers-storage-overlay-f6d319a04eb84dc57153a6a551fb3dd77eaba1c3870937f82853f3d915426996-merged.mount: Deactivated successfully.
Dec 2 02:47:15 localhost podman[33177]: 2025-12-02 07:47:15.516537039 +0000 UTC m=+0.097457557 container remove 0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_khayyam, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:47:15 localhost systemd[1]: libpod-conmon-0691cc376268e2dcc04c0bc8a30996bde72c3b7924fd317ec7fa2af03d4d7e68.scope: Deactivated successfully.
Dec 2 02:47:16 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 2 02:47:16 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.116 iops: 6429.754 elapsed_sec: 0.467
Dec 2 02:47:16 localhost ceph-osd[31622]: log_channel(cluster) log [WRN] : OSD bench result of 6429.754338 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 0 waiting for initial osdmap
Dec 2 02:47:16 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0[31618]: 2025-12-02T07:47:16.157+0000 7f809c87d640 -1 osd.0 0 waiting for initial osdmap
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 12 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 12 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 12 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 12 check_osdmap_features require_osd_release unknown -> reef
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 12 set_numa_affinity not setting numa affinity
Dec 2 02:47:16 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-0[31618]: 2025-12-02T07:47:16.173+0000 7f8097692640 -1 osd.0 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 2 02:47:16 localhost ceph-osd[31622]: osd.0 12 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial
Dec 2 02:47:17 localhost ceph-osd[31622]: osd.0 13 state: booting -> active
Dec 2 02:47:17 localhost ceph-osd[32582]: osd.3 0 done with init, starting boot process
Dec 2 02:47:17 localhost ceph-osd[32582]: osd.3 0 start_boot
Dec 2 02:47:17 localhost ceph-osd[32582]: osd.3 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 2 02:47:17 localhost ceph-osd[32582]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 2 02:47:17 localhost ceph-osd[32582]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 2 02:47:17 localhost ceph-osd[32582]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 2 02:47:17 localhost ceph-osd[32582]: osd.3 0 bench count 12288000 bsize 4 KiB
Dec 2 02:47:18 localhost podman[33308]: 2025-12-02 07:47:18.5657752 +0000 UTC m=+0.090248029 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7)
Dec 2 02:47:18 localhost podman[33308]: 2025-12-02 07:47:18.696750419 +0000 UTC m=+0.221223238 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4, vcs-type=git, release=1763362218, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main)
Dec 2 02:47:20 localhost ceph-osd[31622]: osd.0 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 2 02:47:20 localhost ceph-osd[31622]: osd.0 16 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 2 02:47:20 localhost ceph-osd[31622]: osd.0 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 2 02:47:20 localhost podman[33499]:
Dec 2 02:47:20 localhost podman[33499]: 2025-12-02 07:47:20.557432601 +0000 UTC m=+0.057123337 container create c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_nash, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public)
Dec 2 02:47:20 localhost systemd[1]: Started libpod-conmon-c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8.scope.
Dec 2 02:47:20 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:20 localhost podman[33499]: 2025-12-02 07:47:20.618285569 +0000 UTC m=+0.117976275 container init c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_nash, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph)
Dec 2 02:47:20 localhost podman[33499]: 2025-12-02 07:47:20.623912762 +0000 UTC m=+0.123603468 container start c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_nash, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux )
Dec 2 02:47:20 localhost podman[33499]: 2025-12-02 07:47:20.624098037 +0000 UTC m=+0.123788793 container attach c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_nash, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True)
Dec 2 02:47:20 localhost dreamy_nash[33514]: 167 167
Dec 2 02:47:20 localhost systemd[1]: libpod-c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8.scope: Deactivated successfully.
Dec 2 02:47:20 localhost podman[33499]: 2025-12-02 07:47:20.628148698 +0000 UTC m=+0.127839464 container died c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_nash, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Dec 2 02:47:20 localhost podman[33499]: 2025-12-02 07:47:20.53057905 +0000 UTC m=+0.030269756 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:20 localhost systemd[1]: tmp-crun.tb91cZ.mount: Deactivated successfully.
Dec 2 02:47:20 localhost podman[33519]: 2025-12-02 07:47:20.706385609 +0000 UTC m=+0.065719681 container remove c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_nash, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.541 iops: 6538.433 elapsed_sec: 0.459
Dec 2 02:47:20 localhost systemd[1]: libpod-conmon-c54287c3e6079dc8824e725de759ac7890775e3173baec7c281b45046bccd6c8.scope: Deactivated successfully.
Dec 2 02:47:20 localhost ceph-osd[32582]: log_channel(cluster) log [WRN] : OSD bench result of 6538.432602 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 0 waiting for initial osdmap
Dec 2 02:47:20 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3[32578]: 2025-12-02T07:47:20.707+0000 7f6814985640 -1 osd.3 0 waiting for initial osdmap
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 16 check_osdmap_features require_osd_release unknown -> reef
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 16 set_numa_affinity not setting numa affinity
Dec 2 02:47:20 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-3[32578]: 2025-12-02T07:47:20.727+0000 7f680ffaf640 -1 osd.3 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 2 02:47:20 localhost ceph-osd[32582]: osd.3 16 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial
Dec 2 02:47:20 localhost podman[33540]:
Dec 2 02:47:20 localhost podman[33540]: 2025-12-02 07:47:20.867110628 +0000 UTC m=+0.072224899 container create 9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_swanson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1763362218, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True)
Dec 2 02:47:20 localhost systemd[1]: Started libpod-conmon-9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946.scope.
Dec 2 02:47:20 localhost systemd[1]: Started libcrun container.
Dec 2 02:47:20 localhost podman[33540]: 2025-12-02 07:47:20.83783808 +0000 UTC m=+0.042952411 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 02:47:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b3f922765012848beb9047fd6f99789646bd916897a8ada6d5311e94b7a5f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b3f922765012848beb9047fd6f99789646bd916897a8ada6d5311e94b7a5f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b3f922765012848beb9047fd6f99789646bd916897a8ada6d5311e94b7a5f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 02:47:20 localhost podman[33540]: 2025-12-02 07:47:20.974510464 +0000 UTC m=+0.179624745 container init 9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_swanson, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7)
Dec 2 02:47:20 localhost podman[33540]: 2025-12-02 07:47:20.983779746 +0000 UTC m=+0.188894027 container start 9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_swanson, GIT_BRANCH=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, release=1763362218, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 02:47:20 localhost podman[33540]: 2025-12-02 07:47:20.984839306 +0000 UTC m=+0.189953577 container attach 9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_swanson, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, version=7, name=rhceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7)
Dec 2 02:47:21 localhost ceph-osd[32582]: osd.3 17 state: booting -> active
Dec 2 02:47:21 localhost systemd[1]: var-lib-containers-storage-overlay-8a69e68136ced8377d960a81f292b992785ccdddd6f62f8a188f5c9221a01ccc-merged.mount: Deactivated successfully.
Dec 2 02:47:21 localhost trusting_swanson[33555]: [
Dec 2 02:47:21 localhost trusting_swanson[33555]: {
Dec 2 02:47:21 localhost trusting_swanson[33555]: "available": false,
Dec 2 02:47:21 localhost trusting_swanson[33555]: "ceph_device": false,
Dec 2 02:47:21 localhost trusting_swanson[33555]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "lsm_data": {},
Dec 2 02:47:21 localhost trusting_swanson[33555]: "lvs": [],
Dec 2 02:47:21 localhost trusting_swanson[33555]: "path": "/dev/sr0",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "rejected_reasons": [
Dec 2 02:47:21 localhost trusting_swanson[33555]: "Has a FileSystem",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "Insufficient space (<5GB)"
Dec 2 02:47:21 localhost trusting_swanson[33555]: ],
Dec 2 02:47:21 localhost trusting_swanson[33555]: "sys_api": {
Dec 2 02:47:21 localhost trusting_swanson[33555]: "actuators": null,
Dec 2 02:47:21 localhost trusting_swanson[33555]: "device_nodes": "sr0",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "human_readable_size": "482.00 KB",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "id_bus": "ata",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "model": "QEMU DVD-ROM",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "nr_requests": "2",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "partitions": {},
Dec 2 02:47:21 localhost trusting_swanson[33555]: "path": "/dev/sr0",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "removable": "1",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "rev": "2.5+",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "ro": "0",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "rotational": "1",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "sas_address": "",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "sas_device_handle": "",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "scheduler_mode": "mq-deadline",
Dec 2 02:47:21 localhost trusting_swanson[33555]: "sectors": 0,
Dec 2 02:47:21 localhost trusting_swanson[33555]: "sectorsize": "2048", Dec 2 02:47:21 localhost trusting_swanson[33555]: "size": 493568.0, Dec 2 02:47:21 localhost trusting_swanson[33555]: "support_discard": "0", Dec 2 02:47:21 localhost trusting_swanson[33555]: "type": "disk", Dec 2 02:47:21 localhost trusting_swanson[33555]: "vendor": "QEMU" Dec 2 02:47:21 localhost trusting_swanson[33555]: } Dec 2 02:47:21 localhost trusting_swanson[33555]: } Dec 2 02:47:21 localhost trusting_swanson[33555]: ] Dec 2 02:47:21 localhost systemd[1]: libpod-9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946.scope: Deactivated successfully. Dec 2 02:47:21 localhost podman[33540]: 2025-12-02 07:47:21.749920729 +0000 UTC m=+0.955035040 container died 9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_swanson, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 02:47:21 
localhost systemd[1]: var-lib-containers-storage-overlay-60b3f922765012848beb9047fd6f99789646bd916897a8ada6d5311e94b7a5f7-merged.mount: Deactivated successfully.
Dec 2 02:47:21 localhost podman[34852]: 2025-12-02 07:47:21.844775444 +0000 UTC m=+0.080912786 container remove 9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_swanson, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vcs-type=git, RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 2 02:47:21 localhost systemd[1]: libpod-conmon-9cd05049eef7e9afe599ca51ffb7d2e13c1e405ecc519b3b1d1124396e074946.scope: Deactivated successfully.
Dec 2 02:47:23 localhost ceph-osd[32582]: osd.3 pg_epoch: 18 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=18) [1,5,3] r=2 lpr=18 pi=[16,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 02:47:31 localhost systemd[25916]: Starting Mark boot as successful...
Dec 2 02:47:31 localhost podman[34980]: 2025-12-02 07:47:31.459043425 +0000 UTC m=+0.118087788 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True)
Dec 2 02:47:31 localhost systemd[25916]: Finished Mark boot as successful.
Dec 2 02:47:31 localhost podman[34980]: 2025-12-02 07:47:31.594761912 +0000 UTC m=+0.253806285 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.buildah.version=1.41.4, ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, release=1763362218, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-type=git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph)
Dec 2 02:48:33 localhost systemd[1]: tmp-crun.T4K8gV.mount: Deactivated successfully.
Dec 2 02:48:33 localhost podman[35157]: 2025-12-02 07:48:33.435885848 +0000 UTC m=+0.093363104 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, RELEASE=main, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Dec 2 02:48:33 localhost podman[35157]: 2025-12-02 07:48:33.566335473 +0000 UTC m=+0.223812719 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z,
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph)
Dec 2 02:48:42 localhost systemd-logind[757]: Session 14 logged out. Waiting for processes to exit.
Dec 2 02:48:42 localhost systemd[1]: session-14.scope: Deactivated successfully.
Dec 2 02:48:42 localhost systemd[1]: session-14.scope: Consumed 21.488s CPU time.
Dec 2 02:48:42 localhost systemd-logind[757]: Removed session 14.
Dec 2 02:51:16 localhost systemd[25916]: Created slice User Background Tasks Slice.
Dec 2 02:51:16 localhost systemd[25916]: Starting Cleanup of User's Temporary Files and Directories...
Dec 2 02:51:16 localhost systemd[25916]: Finished Cleanup of User's Temporary Files and Directories.
Dec 2 02:52:06 localhost sshd[35534]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:52:06 localhost systemd-logind[757]: New session 28 of user zuul.
Dec 2 02:52:06 localhost systemd[1]: Started Session 28 of User zuul.
Dec 2 02:52:07 localhost python3[35582]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 2 02:52:07 localhost python3[35627]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 02:52:08 localhost python3[35647]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541913.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 2 02:52:08 localhost python3[35703]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:52:09 localhost python3[35746]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764661928.636323-65475-121280229714750/source _original_basename=tmpgqowd7s2 follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:52:09 localhost python3[35776]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:52:10 localhost python3[35792]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:52:10 localhost python3[35808]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:52:11 localhost python3[35824]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None
serole=None selevel=None setype=None attributes=None
Dec 2 02:52:11 localhost python3[35838]: ansible-ping Invoked with data=pong
Dec 2 02:52:22 localhost sshd[35839]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 02:52:22 localhost systemd[1]: Created slice User Slice of UID 1003.
Dec 2 02:52:22 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 2 02:52:22 localhost systemd-logind[757]: New session 29 of user tripleo-admin.
Dec 2 02:52:22 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 2 02:52:22 localhost systemd[1]: Starting User Manager for UID 1003...
Dec 2 02:52:22 localhost systemd[35843]: Queued start job for default target Main User Target.
Dec 2 02:52:22 localhost systemd[35843]: Created slice User Application Slice.
Dec 2 02:52:22 localhost systemd[35843]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 02:52:22 localhost systemd[35843]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 02:52:22 localhost systemd[35843]: Reached target Paths.
Dec 2 02:52:22 localhost systemd[35843]: Reached target Timers.
Dec 2 02:52:22 localhost systemd[35843]: Starting D-Bus User Message Bus Socket...
Dec 2 02:52:22 localhost systemd[35843]: Starting Create User's Volatile Files and Directories...
Dec 2 02:52:22 localhost systemd[35843]: Listening on D-Bus User Message Bus Socket.
Dec 2 02:52:22 localhost systemd[35843]: Finished Create User's Volatile Files and Directories.
Dec 2 02:52:22 localhost systemd[35843]: Reached target Sockets.
Dec 2 02:52:22 localhost systemd[35843]: Reached target Basic System.
Dec 2 02:52:22 localhost systemd[35843]: Reached target Main User Target.
Dec 2 02:52:22 localhost systemd[35843]: Startup finished in 136ms.
Dec 2 02:52:22 localhost systemd[1]: Started User Manager for UID 1003.
Dec 2 02:52:22 localhost systemd[1]: Started Session 29 of User tripleo-admin.
Dec 2 02:52:23 localhost python3[35904]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 02:52:28 localhost python3[35924]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Dec 2 02:52:29 localhost python3[35940]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 2 02:52:29 localhost python3[35988]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.b2zqkx1htmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:52:30 localhost python3[36018]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.b2zqkx1htmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section!
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:31 localhost python3[36034]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.b2zqkx1htmphosts insertbefore=BOF block=172.17.0.106 np0005541912.localdomain np0005541912#012172.18.0.106 np0005541912.storage.localdomain np0005541912.storage#012172.20.0.106 np0005541912.storagemgmt.localdomain np0005541912.storagemgmt#012172.17.0.106 np0005541912.internalapi.localdomain np0005541912.internalapi#012172.19.0.106 np0005541912.tenant.localdomain np0005541912.tenant#012192.168.122.106 np0005541912.ctlplane.localdomain np0005541912.ctlplane#012172.17.0.107 np0005541913.localdomain np0005541913#012172.18.0.107 np0005541913.storage.localdomain np0005541913.storage#012172.20.0.107 np0005541913.storagemgmt.localdomain np0005541913.storagemgmt#012172.17.0.107 np0005541913.internalapi.localdomain np0005541913.internalapi#012172.19.0.107 np0005541913.tenant.localdomain np0005541913.tenant#012192.168.122.107 np0005541913.ctlplane.localdomain np0005541913.ctlplane#012172.17.0.108 np0005541914.localdomain np0005541914#012172.18.0.108 np0005541914.storage.localdomain np0005541914.storage#012172.20.0.108 np0005541914.storagemgmt.localdomain np0005541914.storagemgmt#012172.17.0.108 np0005541914.internalapi.localdomain np0005541914.internalapi#012172.19.0.108 np0005541914.tenant.localdomain np0005541914.tenant#012192.168.122.108 np0005541914.ctlplane.localdomain np0005541914.ctlplane#012172.17.0.103 np0005541909.localdomain np0005541909#012172.18.0.103 np0005541909.storage.localdomain np0005541909.storage#012172.20.0.103 np0005541909.storagemgmt.localdomain np0005541909.storagemgmt#012172.17.0.103 np0005541909.internalapi.localdomain np0005541909.internalapi#012172.19.0.103 np0005541909.tenant.localdomain np0005541909.tenant#012192.168.122.103 
np0005541909.ctlplane.localdomain np0005541909.ctlplane#012172.17.0.104 np0005541910.localdomain np0005541910#012172.18.0.104 np0005541910.storage.localdomain np0005541910.storage#012172.20.0.104 np0005541910.storagemgmt.localdomain np0005541910.storagemgmt#012172.17.0.104 np0005541910.internalapi.localdomain np0005541910.internalapi#012172.19.0.104 np0005541910.tenant.localdomain np0005541910.tenant#012192.168.122.104 np0005541910.ctlplane.localdomain np0005541910.ctlplane#012172.17.0.105 np0005541911.localdomain np0005541911#012172.18.0.105 np0005541911.storage.localdomain np0005541911.storage#012172.20.0.105 np0005541911.storagemgmt.localdomain np0005541911.storagemgmt#012172.17.0.105 np0005541911.internalapi.localdomain np0005541911.internalapi#012172.19.0.105 np0005541911.tenant.localdomain np0005541911.tenant#012192.168.122.105 np0005541911.ctlplane.localdomain np0005541911.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.121 overcloud.storage.localdomain#012172.20.0.222 overcloud.storagemgmt.localdomain#012172.17.0.136 overcloud.internalapi.localdomain#012172.21.0.241 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:52:31 localhost python3[36051]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.b2zqkx1htmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:52:32 localhost python3[36068]: ansible-file Invoked with path=/tmp/ansible.b2zqkx1htmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:52:33 localhost python3[36084]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:52:34 localhost python3[36101]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:52:38 localhost python3[36120]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:52:39 localhost python3[36137]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[]
download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:53:53 localhost kernel: SELinux: Converting 2700 SID table entries...
Dec 2 02:53:53 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:53:53 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:53:53 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:53:53 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:53:53 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:53:53 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:53:53 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:53:53 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=6 res=1
Dec 2 02:53:53 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:53:53 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:53:53 localhost systemd[1]: Reloading.
Dec 2 02:53:53 localhost systemd-rc-local-generator[37209]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:53:53 localhost systemd-sysv-generator[37213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:53:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:53:54 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:53:54 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:53:54 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:53:54 localhost systemd[1]: run-r9bfe09271b094052ac972abeb413283b.service: Deactivated successfully.
Dec 2 02:53:55 localhost python3[37644]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:53:57 localhost python3[37783]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:53:57 localhost systemd[1]: Reloading.
Dec 2 02:53:57 localhost systemd-rc-local-generator[37813]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:53:57 localhost systemd-sysv-generator[37817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:53:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:53:57 localhost python3[37837]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:53:58 localhost python3[37853]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:53:59 localhost python3[37870]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 2 02:54:01 localhost python3[37888]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:01 localhost python3[37906]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:02 localhost python3[37924]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:54:02 localhost systemd[1]: Reloading Network Manager...
Dec 2 02:54:02 localhost NetworkManager[5965]: [1764662042.4569] audit: op="reload" arg="0" pid=37927 uid=0 result="success"
Dec 2 02:54:02 localhost NetworkManager[5965]: [1764662042.4581] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Dec 2 02:54:02 localhost NetworkManager[5965]: [1764662042.4582] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 2 02:54:02 localhost systemd[1]: Reloaded Network Manager.
Dec 2 02:54:02 localhost python3[37943]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:54:04 localhost python3[37960]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:54:04 localhost python3[37978]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:54:05 localhost python3[37994]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:54:06 localhost python3[38010]: ansible-tempfile Invoked with state=file prefix=ansible.
suffix= path=None Dec 2 02:54:06 localhost python3[38026]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 02:54:07 localhost python3[38042]: ansible-blockinfile Invoked with path=/tmp/ansible.wl_m9jvj block=[192.168.122.106]*,[np0005541912.ctlplane.localdomain]*,[172.17.0.106]*,[np0005541912.internalapi.localdomain]*,[172.18.0.106]*,[np0005541912.storage.localdomain]*,[172.20.0.106]*,[np0005541912.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005541912.tenant.localdomain]*,[np0005541912.localdomain]*,[np0005541912]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKgyHtHHKWFdaOqx5AsvOJPmNsbjVxvzh05A7Hy02rgbdg4zBUd/E0mqG+tYVGg12fIdbRNgjUfM+PEGJznZdEQnZCtLgMhbpRC33IbCXMw7Ev/tRfkffpP+H8VdyGL83zCFFnMIMD2IDWU+MjTf/ais63Zv/UiBL24pkZ18u3nypjN3uN2FdeDF4JNtnSVK6i1a+wE6wLmdSAfX8ovFbLhZMgAAPU3I3Fu5D/pSa6OjKshEcNy0m6KCKwQoT6cbDGsnMjd2sdE1Vc+KgkrBN3fMmrChdgi2Ig7CpkdGvQF0G/t53cwNatjp78FrNCHjpLcIAFw3QgfepiTiXQbXQ/jC5xkdM+5wIcSmB3rf3GKaUgaxnjk55GAXxrHwAFwOi+ltxSNPszH9vfIBLluThUdmQmvtCOCvEFZ5uuVuu94A5frS9BzOIzz7ylrqau3nHGaPjbT80XubnqZsHlOahsovbk1mu3ewvoitAVb0E+BBroNWeHT9BbA8Igh+sxwGM=#012[192.168.122.107]*,[np0005541913.ctlplane.localdomain]*,[172.17.0.107]*,[np0005541913.internalapi.localdomain]*,[172.18.0.107]*,[np0005541913.storage.localdomain]*,[172.20.0.107]*,[np0005541913.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005541913.tenant.localdomain]*,[np0005541913.localdomain]*,[np0005541913]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDYXeXWwxJkeR9i2V9hYiVGqEGSbkwFIKUbTm3m8em9m5o380jUORSYXOITLm0CAl/waSYEc4fiPu2sAYDISig1zqAItfAODEdayFoKK63ui7vq92ZPKayhmjahj2jNo3KMAZ5aFzNBcowsRooRqLNJ7R9BAQ4H8kdqL9xdRjy5bvfWJHGrm8PvWcUaRYebCQ35j+7nHq4RFRYsd964NKjrq+FxkjyOSs2AxE+SHYOVgAAd8Jp2uyr3dR56IzWy8WqQzPj6tlsER8+/Kt1lASATcuMFeteA0M7tbjZxEIAPyfktPVQOq9mgeFOFmTf8oTbt94Rk2QmyNI4oE7sQHFWo9UWrvZd9LpDDartUls5uHunn4SzvgvtRimO3e1hNXn0VQLGNfSUwGij0R3iOYJpACHgly3J7sbX3tROvwRpawZlGIGZY46vaYRMXGClXz+lUCa6ZZO+f6BX6bEt0VfYWX8IVmnH2oJXEJBYJPVXZML+OcczJc8zEfHxBylpZn4k=#012[192.168.122.108]*,[np0005541914.ctlplane.localdomain]*,[172.17.0.108]*,[np0005541914.internalapi.localdomain]*,[172.18.0.108]*,[np0005541914.storage.localdomain]*,[172.20.0.108]*,[np0005541914.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005541914.tenant.localdomain]*,[np0005541914.localdomain]*,[np0005541914]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHh7115UF/t7QzqWY1fk2wHPOuHuMPRhaYTC/yfMWr+nqJ5/TNZTuFxq0aW/1gHanB2usmC0wpWf4c1KsPZ71Ehs/j5nV1wfGtNVEq5Zj7uhs0ea/SQToF2RS406RoIzJW6ogv4Kl3nxGEK6c44WCu8+Ki98dCQ4wesh5kSBkqgiSq2IZkL2gjoAKeXdracGRJ596gTB0yfsMl/qdJDneVHMq/rptlFhabLeiEN+7C0o0gsZwYsxCd2oSB+DD9KfXhWIBeXRr1B7mFcMZpGNG7pG0d1IjYOUmqjvVpECHrLvjiitS3800ZEFwygU4sbM/DWHelobjtJB/fxxPTtGNlbH4MK/OGFh2mm5jB1LMqWSsifA/ZAHASAAffWDwKtF+xJ06OHRDT6gjzOd7VJpc8kR9Jn9pT7UnjypnrM12GtrO0CH8Lf3rin71kf9iZRIphqWXhiLN3G/mdJC2XPIxJp7NQ1Mqc5IhHciCv80bvsGrzLCtAr16/b+cPYo7vIGU=#012[192.168.122.103]*,[np0005541909.ctlplane.localdomain]*,[172.17.0.103]*,[np0005541909.internalapi.localdomain]*,[172.18.0.103]*,[np0005541909.storage.localdomain]*,[172.20.0.103]*,[np0005541909.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005541909.tenant.localdomain]*,[np0005541909.localdomain]*,[np0005541909]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC0b4xecJ9cZa0s7FCPYSs6kLrfHyBh8YL/KS+tj3DrfUU03KCcmbHQesHBBcRxB6PDYjueAsvx5rGXzjMojO5Jz2DlZoSPaBM9tm/HAKWhaiL+seTfrRsNLFvxfWyxU/x0FUSOTf01ZThrT/IJ5WkfJD4UgZQSzUPucffImwFt4y2oERfa96sAwSwE4o5RuLzRdKuWB3npxcApj2/3+pyWR59yubokMiU506MI37Hbg8xCaC5qn4ISKB8WBJObICoNQoatrbcqSOrrUEFv/vcWANDYUEw6XzTTwkuIu6dJPJiJh8j5TzDnnvKSK+f3eEG7OCiz814F+o82tDo7U6k5ERO0xmElXdOlPYsiuM5+CTQmmm6xmFN2L3HIvZlyPn3oF26oV+INAd3XsF5MIFcfpGUXH5b04gE7LhpdVLVfLGGYSVWjZhzxl/Wa0OiHoMaDUYoN2bPG0h5SPUDIyDv2jW3FDxhOWANR/9ITUCQpz3gSwl/1AVN3HCWf+RUeLuE=#012[192.168.122.104]*,[np0005541910.ctlplane.localdomain]*,[172.17.0.104]*,[np0005541910.internalapi.localdomain]*,[172.18.0.104]*,[np0005541910.storage.localdomain]*,[172.20.0.104]*,[np0005541910.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005541910.tenant.localdomain]*,[np0005541910.localdomain]*,[np0005541910]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOmh2HMG9Y5+9VA8Ap3pHIOQhG/GfAsIqnmfJJuGwKb8N2T9r1Yd+kmoP7Xs41cto4h6Fw1f4Pa6Tw050y3LmwpXvDN+2Qq1qYI0rT4pqOiYBkyMbOQhqLF5tA+MNYGdibQj/fWkG+gKa8wwzkTgCEAn6PgEZiqR9LFJrqr4RfQDxaWCLmXM96+AVGG5/SXWx5u6T3lanUnpcfISvB2yx4HifsINAHPgLR4weEzra/b7e0QNyxItxvlDseasPyeYHD3Hdi2PNuUmoZC+zWEoWoU3BMAQeXR7lmEcdtyK5wr0pIBmf0CKFdvGrdVWrzAUbDc8ZHXmWyKlWHHZvHch1V2r/S4J2983UsG3sJwM8954Tj325LgS1nldIYBSjwMGfhZFYzmy9obAN7ZSV5qwD0h+rxt/I9RNdXS3SRu9tOZI+AN59De44cF23OJS5MfrfnB7JUnBOv4ScVML4rPjPx9L4/omOlfbBVJx42b1RlboXEk52J7Aa3xRseA4Elvuk=#012[192.168.122.105]*,[np0005541911.ctlplane.localdomain]*,[172.17.0.105]*,[np0005541911.internalapi.localdomain]*,[172.18.0.105]*,[np0005541911.storage.localdomain]*,[172.20.0.105]*,[np0005541911.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005541911.tenant.localdomain]*,[np0005541911.localdomain]*,[np0005541911]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCzI5YTDMvj8zBlKqeNplIMBQQJ43gcDfB5cRE7DwwpHBRcqOuhSoIm7r0C3h5ABQJYkTXEGRY0i5HC5eMErD7SKRJJ3q9aZ+uv4VvUGagr7M9S/JGUjZej2+ACXZ7L+d9MLt389xVtIuuNh5Cy3U8muIBEAS1b4mXOJ95eiW3M5b2hxmol0DTjUMX/bLtJU/MQ09wE72pj6Uqz/CCFsUwDBZlQ3jcVK74fYwgItCNkLJ+D2E4wTl4Ei8XOlEY9cV8B1E+aK6iUKesiya0Vfi/Ant77ONQDeCsI21AJDbi5wtUXg4qXBu3Z/zObZiEmedzqWj7K46Nv8lDlQoeoKuxzTCwxgn0PaorQgkUvUdAyk5Qo4BaUOv8ojICiZvRy9QZ3jblr1dCM/Jy3g4Sz6Hz4QHxtV21nUw//sBN2X6jCHQVGTJeZrbVvgGNcGiqcCzQTW/4NoiOB0ho7RVNtD+oYb5UE+Lh+Ibua3bv7zfnLjsw1GiyclsCgrQTKBl8Netc=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:07 localhost python3[38058]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.wl_m9jvj' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:54:08 localhost python3[38076]: ansible-file Invoked with path=/tmp/ansible.wl_m9jvj state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:09 localhost python3[38092]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 02:54:09 localhost 
python3[38108]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:54:09 localhost python3[38126]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:54:10 localhost python3[38145]: ansible-community.general.cloud_init_data_facts Invoked with filter=status Dec 2 02:54:12 localhost python3[38282]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:54:13 localhost python3[38299]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 02:54:16 localhost dbus-broker-launch[742]: Noticed file-system modification, trigger reload. Dec 2 02:54:16 localhost dbus-broker-launch[742]: Noticed file-system modification, trigger reload. Dec 2 02:54:16 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 2 02:54:17 localhost systemd[1]: Starting man-db-cache-update.service... 
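A note on the payloads above: journald/syslog escapes control characters as `#NNN` octal sequences, so `#012` in the `ansible-blockinfile` block (and in the multi-line `_raw_params` commands further on) is an embedded newline. A small helper to expand those escapes when reading such entries — a sketch; the function name is my own, and `printf '%b'` will also interpret any backslash escapes already present in the payload:

```shell
# Expand syslog/journald #NNN octal escapes (e.g. #012 = newline, #011 = tab)
# back into the characters they encode. Sketch only: rewrites #NNN to \0NNN,
# then lets printf %b decode the octal sequence.
unescape_syslog() {
    printf '%b' "$(printf '%s' "$1" | sed -E 's/#([0-7]{3})/\\0\1/g')"
}

# Example: two known_hosts entries joined by #012, as in the blockinfile log entry.
unescape_syslog 'host-a ssh-rsa AAAA...#012host-b ssh-rsa BBBB...'
```

Running the example prints the two entries on separate lines, which is how they end up in `/etc/ssh/ssh_known_hosts` after the logged `cat '/tmp/ansible.wl_m9jvj' > /etc/ssh/ssh_known_hosts` step.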
Dec 2 02:54:17 localhost systemd[1]: Reloading. Dec 2 02:54:17 localhost systemd-rc-local-generator[38368]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:54:17 localhost systemd-sysv-generator[38374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:54:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:54:17 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 2 02:54:17 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Dec 2 02:54:17 localhost systemd[1]: tuned.service: Deactivated successfully. Dec 2 02:54:17 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Dec 2 02:54:17 localhost systemd[1]: tuned.service: Consumed 2.317s CPU time. Dec 2 02:54:17 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Dec 2 02:54:17 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 2 02:54:17 localhost systemd[1]: Finished man-db-cache-update.service. Dec 2 02:54:17 localhost systemd[1]: run-r0c64b10bcab6453d9b822e7452915af9.service: Deactivated successfully. Dec 2 02:54:18 localhost systemd[1]: Started Dynamic System Tuning Daemon. Dec 2 02:54:18 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 2 02:54:18 localhost systemd[1]: Starting man-db-cache-update.service... Dec 2 02:54:18 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 2 02:54:18 localhost systemd[1]: Finished man-db-cache-update.service. Dec 2 02:54:18 localhost systemd[1]: run-rd9c8c7bcc4064a7c8926f3cb3baa1060.service: Deactivated successfully. 
Dec 2 02:54:19 localhost python3[38735]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 02:54:19 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Dec 2 02:54:20 localhost systemd[1]: tuned.service: Deactivated successfully. Dec 2 02:54:20 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Dec 2 02:54:20 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Dec 2 02:54:21 localhost systemd[1]: Started Dynamic System Tuning Daemon. Dec 2 02:54:21 localhost python3[38930]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:54:22 localhost python3[38947]: ansible-slurp Invoked with src=/etc/tuned/active_profile Dec 2 02:54:22 localhost python3[38963]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 02:54:23 localhost python3[38979]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:54:25 localhost python3[38999]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:54:25 localhost python3[39016]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 02:54:28 localhost python3[39032]: ansible-replace Invoked with 
regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:33 localhost python3[39048]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:33 localhost python3[39096]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:33 localhost python3[39141]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662073.255895-70027-258494704750880/source _original_basename=tmpqyal53v0 follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:34 localhost python3[39171]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:34 localhost python3[39219]: ansible-ansible.legacy.stat Invoked with 
path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:35 localhost python3[39262]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662074.462108-70101-256517337408385/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=303a9e8dd06eeb9157c66bb31355109aa4c872ae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:35 localhost python3[39324]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:35 localhost python3[39367]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662075.3088977-70338-118411139602545/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=da1c3b8584bf2231cac158ee0d91c3ea69fbb742 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:36 localhost python3[39429]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:36 localhost python3[39472]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662076.0728672-70338-200263392274181/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 
checksum=cefd5bd69caea640bd56356af0b9c6878752d6a2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:37 localhost python3[39534]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:37 localhost python3[39577]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662076.8803103-70338-74319233140249/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=175c760950d63a47f443f25b58088dba962f090b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:38 localhost python3[39639]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:38 localhost python3[39682]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662077.7541177-70338-239947092356002/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:38 localhost python3[39744]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False 
get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:39 localhost python3[39787]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662078.5195305-70338-224258781882864/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=ee812c4410e77888a2aa029c6a63e712c30d05b7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:39 localhost python3[39849]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:40 localhost python3[39892]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662079.3448932-70338-67530307605258/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:40 localhost python3[39954]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:40 localhost python3[39997]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662080.2452297-70338-90429950494491/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=c605747c28ed219c21bc7a334ba3c66112b9a2b8 
backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:41 localhost python3[40059]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:41 localhost python3[40102]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662081.0901384-70338-232170586210173/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:42 localhost python3[40164]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:42 localhost python3[40207]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662081.939774-70338-72770872888350/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:43 localhost python3[40299]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True 
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:43 localhost python3[40368]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662082.7777872-70338-275549495189120/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=10edb31dfbca94f943eb45361d83d805daa0e00e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:43 localhost python3[40404]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 02:54:44 localhost python3[40467]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:54:44 localhost python3[40510]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662084.2868075-70928-241374715003322/source _original_basename=tmprees3h1u follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:54:50 localhost python3[40540]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 02:54:51 localhost python3[40601]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 
_uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:54:56 localhost python3[40618]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:01 localhost python3[40635]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:02 localhost python3[40658]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:06 localhost python3[40675]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:07 localhost python3[40698]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:11 localhost python3[40715]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None 
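The repeated `_raw_params` payload in the entries above is a three-line shell script with its newlines encoded as `#012`. Expanded, it resolves which interface routes to a peer address and reports that interface's MTU, which the subsequent `ping -s 1472` checks then validate. A readable reconstruction — the address is one example taken from the log, and the script falls back to `0` when the interface cannot be determined:

```shell
# MTU probe as logged (with #012 expanded back to newlines):
# find the interface used to reach the peer, then read its MTU from sysfs.
peer="192.168.122.106"   # example peer address from the log

route_line=$(ip route get "$peer" 2>/dev/null | head -1)
INT=$(printf '%s\n' "$route_line" | sed -nr 's/.* dev (\w+) .*/\1/p')
MTU=$(cat "/sys/class/net/${INT}/mtu" 2>/dev/null || echo "0")
echo "$INT $MTU"
```

The follow-up `ping -w 10 -s 1472 -c 5` probes send 1472-byte payloads (1500 bytes on the wire with IP and ICMP headers), confirming a standard-MTU path to each peer.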
Dec 2 02:55:16 localhost python3[40732]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:16 localhost systemd[35843]: Starting Mark boot as successful... Dec 2 02:55:16 localhost systemd[35843]: Finished Mark boot as successful. Dec 2 02:55:17 localhost python3[40756]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:22 localhost python3[40773]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:26 localhost python3[40790]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:26 localhost python3[40813]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:31 localhost python3[40830]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None 
removes=None stdin=None Dec 2 02:55:35 localhost python3[40847]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:36 localhost python3[40870]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:40 localhost python3[40887]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:55:45 localhost python3[40966]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:46 localhost python3[41014]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:46 localhost python3[41032]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpel0hwj_s recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:47 localhost python3[41062]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:47 localhost python3[41125]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:47 localhost python3[41143]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:55:48 localhost python3[41205]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:55:48 localhost python3[41223]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 
Dec 2 02:55:49 localhost python3[41285]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:49 localhost python3[41303]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:50 localhost python3[41365]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:50 localhost python3[41383]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:50 localhost python3[41445]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:50 localhost python3[41463]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:51 localhost python3[41525]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:51 localhost python3[41543]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:52 localhost python3[41605]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:52 localhost python3[41623]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:53 localhost python3[41685]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:53 localhost python3[41703]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:53 localhost python3[41765]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:53 localhost python3[41783]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:54 localhost python3[41845]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:54 localhost python3[41863]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:55 localhost python3[41925]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:55 localhost python3[41943]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:55 localhost python3[41973]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:55:56 localhost python3[42021]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:55:56 localhost python3[42039]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpnai8sik4 recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:55:59 localhost python3[42069]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:56:04 localhost python3[42086]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:56:05 localhost python3[42104]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:56:05 localhost python3[42122]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 02:56:05 localhost systemd[1]: Reloading.
Dec 2 02:56:05 localhost systemd-rc-local-generator[42145]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:56:05 localhost systemd-sysv-generator[42150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:56:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:56:06 localhost systemd[1]: Starting Netfilter Tables...
Dec 2 02:56:06 localhost systemd[1]: Finished Netfilter Tables.
Dec 2 02:56:06 localhost python3[42211]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:07 localhost python3[42254]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662166.5338497-73746-239159711976096/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:07 localhost python3[42284]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:08 localhost python3[42302]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:08 localhost python3[42351]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:08 localhost python3[42394]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662168.185676-73982-262951498292517/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:09 localhost python3[42456]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:09 localhost python3[42499]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662169.0877914-74135-28281127510382/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:10 localhost python3[42561]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:10 localhost python3[42604]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662170.1694684-74200-173283403494308/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:11 localhost python3[42666]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:11 localhost python3[42709]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662171.1695676-74264-270578668711725/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:12 localhost python3[42771]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:13 localhost python3[42814]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662172.1352103-74315-62201974635096/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:13 localhost python3[42844]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:14 localhost python3[42909]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:14 localhost python3[42926]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:15 localhost python3[42943]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:15 localhost python3[42962]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:15 localhost python3[42978]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:16 localhost python3[42994]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:16 localhost python3[43010]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 2 02:56:17 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=7 res=1
Dec 2 02:56:18 localhost python3[43030]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 2 02:56:18 localhost kernel: SELinux: Converting 2704 SID table entries...
Dec 2 02:56:18 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:56:18 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:56:18 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:56:19 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=8 res=1
Dec 2 02:56:19 localhost python3[43051]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 2 02:56:20 localhost kernel: SELinux: Converting 2704 SID table entries...
Dec 2 02:56:20 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:56:20 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:56:20 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:56:20 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:56:20 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:56:20 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:56:20 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:56:20 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=9 res=1
Dec 2 02:56:20 localhost python3[43072]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 2 02:56:21 localhost kernel: SELinux: Converting 2704 SID table entries...
Dec 2 02:56:21 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:56:21 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:56:21 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:56:21 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=10 res=1
Dec 2 02:56:21 localhost python3[43093]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:22 localhost python3[43109]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:22 localhost python3[43125]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:22 localhost python3[43141]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:56:23 localhost python3[43157]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:24 localhost python3[43174]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:56:28 localhost python3[43191]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:28 localhost python3[43239]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:28 localhost python3[43282]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662188.2161925-75162-51963711957140/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:29 localhost python3[43312]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:56:29 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 2 02:56:29 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 2 02:56:29 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 2 02:56:29 localhost systemd[1]: Starting Load Kernel Modules...
Dec 2 02:56:29 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 2 02:56:29 localhost kernel: Bridge firewalling registered
Dec 2 02:56:29 localhost systemd-modules-load[43315]: Inserted module 'br_netfilter'
Dec 2 02:56:29 localhost systemd-modules-load[43315]: Module 'msr' is built in
Dec 2 02:56:29 localhost systemd[1]: Finished Load Kernel Modules.
Dec 2 02:56:30 localhost python3[43366]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:30 localhost python3[43409]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662189.6984463-75228-114654614750421/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:30 localhost python3[43439]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:31 localhost python3[43456]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:31 localhost python3[43474]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:31 localhost python3[43492]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:32 localhost python3[43509]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:32 localhost python3[43526]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:32 localhost python3[43543]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:33 localhost python3[43561]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:33 localhost python3[43579]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:33 localhost python3[43597]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:34 localhost python3[43615]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:34 localhost python3[43633]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:34 localhost python3[43651]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:34 localhost python3[43669]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:35 localhost python3[43686]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:35 localhost python3[43703]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:35 localhost python3[43720]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:36 localhost python3[43737]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 2 02:56:36 localhost python3[43755]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 02:56:36 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 2 02:56:36 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 2 02:56:36 localhost systemd[1]: Stopping Apply Kernel Variables...
Dec 2 02:56:36 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 2 02:56:36 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 2 02:56:36 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 2 02:56:37 localhost python3[43775]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:37 localhost python3[43791]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:37 localhost python3[43807]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:38 localhost python3[43823]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:56:38 localhost python3[43839]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:38 localhost python3[43855]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:39 localhost python3[43871]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:39 localhost python3[43887]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:39 localhost python3[43903]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:40 localhost python3[43951]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:40 localhost python3[43994]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662200.0371897-75713-15013321711854/source _original_basename=tmprx53x7m_ follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:41 localhost python3[44024]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:56:42 localhost python3[44041]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:43 localhost python3[44089]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:56:43 localhost python3[44132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662202.987473-75875-263441585664043/source _original_basename=tmpaxtbu9uv follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:44 localhost python3[44162]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:44 localhost python3[44178]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:44 localhost python3[44194]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:45 localhost python3[44210]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:45 localhost python3[44226]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:45 localhost python3[44242]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:46 localhost python3[44258]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:46 localhost python3[44274]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:56:46 localhost python3[44290]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 02:56:47 localhost python3[44306]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Dec 2
02:56:48 localhost python3[44378]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541913.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Dec 2 02:56:48 localhost python3[44432]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None Dec 2 02:56:48 localhost python3[44478]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:56:49 localhost python3[44542]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:56:49 localhost python3[44587]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662209.1391532-76182-222739817819908/source _original_basename=tmpoc3bs4fq follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:56:50 localhost python3[44617]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Dec 2 02:56:51 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=11 res=1 Dec 2 02:56:51 localhost python3[44641]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:56:51 localhost python3[44657]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:56:52 localhost python3[44673]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False Dec 2 02:56:53 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=12 res=1 Dec 2 02:56:53 localhost python3[44693]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False 
update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 02:56:57 localhost python3[44710]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 02:56:57 localhost python3[44771]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:56:58 localhost python3[44787]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:56:58 localhost python3[44846]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:56:59 localhost python3[44889]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662218.2124321-76572-115944801792356/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=c2417559b11b6be9524eb43292c609dbba924ea1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:56:59 localhost python3[44951]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False 
get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:00 localhost python3[44996]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662219.2084208-76682-13346576187072/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:00 localhost python3[45026]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:00 localhost python3[45042]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:00 localhost python3[45058]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:01 localhost python3[45074]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend 
value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:01 localhost python3[45122]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:02 localhost python3[45165]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662221.6137989-76793-260819696414042/source _original_basename=tmp6ut5pp_4 follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:02 localhost python3[45195]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:03 localhost python3[45211]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:03 localhost python3[45227]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False 
bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 02:57:07 localhost python3[45276]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:07 localhost python3[45321]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662227.1881635-77016-264859561248747/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:08 localhost python3[45352]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 02:57:08 localhost systemd[1]: Stopping OpenSSH server daemon... Dec 2 02:57:08 localhost systemd[1]: sshd.service: Deactivated successfully. Dec 2 02:57:08 localhost systemd[1]: Stopped OpenSSH server daemon. Dec 2 02:57:08 localhost systemd[1]: sshd.service: Consumed 2.042s CPU time, read 2.1M from disk, written 8.0K to disk. Dec 2 02:57:08 localhost systemd[1]: Stopped target sshd-keygen.target. Dec 2 02:57:08 localhost systemd[1]: Stopping sshd-keygen.target... 
Dec 2 02:57:08 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 2 02:57:08 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 2 02:57:08 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 2 02:57:08 localhost systemd[1]: Reached target sshd-keygen.target. Dec 2 02:57:08 localhost systemd[1]: Starting OpenSSH server daemon... Dec 2 02:57:08 localhost sshd[45356]: main: sshd: ssh-rsa algorithm is disabled Dec 2 02:57:08 localhost systemd[1]: Started OpenSSH server daemon. Dec 2 02:57:08 localhost python3[45372]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:09 localhost python3[45390]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:10 localhost python3[45408]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True 
sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:57:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 02:57:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 3259 writes, 16K keys, 3259 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
Cumulative WAL: 3259 writes, 145 syncs, 22.48 writes per sync, written: 0.01 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3259 writes, 16K keys, 3259 commit groups, 1.0 writes per commit group, ingest: 14.69 MB, 0.02 MB/s
Interval WAL: 3259 writes, 145 syncs, 22.48 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0
 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581cab122d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-0] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5581cab122d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 7.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Dec 2 02:57:14 localhost python3[45457]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:57:14 localhost python3[45475]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420
dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:57:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 02:57:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.2 total, 600.0 interval
Cumulative writes: 3388 writes, 16K keys, 3388 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s
Cumulative WAL: 3388 writes, 198 syncs, 17.11 writes per sync, written: 0.01 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3388 writes, 16K keys, 3388 commit groups, 1.0 writes per commit group, ingest: 15.29 MB, 0.03 MB/s
Interval WAL: 3388 writes, 198 syncs, 17.11 writes per sync, written: 0.01 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0
 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [default] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-0] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0
 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0

** Compaction Stats [m-1] **
Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Dec 2 02:57:15 localhost python3[45505]: ansible-ansible.legacy.systemd Invoked with enabled=True
name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 02:57:15 localhost python3[45555]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:16 localhost python3[45573]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:16 localhost python3[45603]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 02:57:16 localhost systemd[1]: Reloading. Dec 2 02:57:16 localhost systemd-rc-local-generator[45624]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 02:57:16 localhost systemd-sysv-generator[45628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:57:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:57:16 localhost systemd[1]: Starting chronyd online sources service... Dec 2 02:57:16 localhost chronyc[45642]: 200 OK Dec 2 02:57:16 localhost systemd[1]: chrony-online.service: Deactivated successfully. 
Dec 2 02:57:16 localhost systemd[1]: Finished chronyd online sources service. Dec 2 02:57:17 localhost python3[45658]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:17 localhost chronyd[25712]: System clock was stepped by -0.000069 seconds Dec 2 02:57:17 localhost python3[45675]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:18 localhost python3[45692]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:18 localhost chronyd[25712]: System clock was stepped by 0.000000 seconds Dec 2 02:57:18 localhost python3[45709]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:18 localhost python3[45726]: ansible-timezone Invoked with name=UTC hwclock=None Dec 2 02:57:19 localhost systemd[1]: Starting Time & Date Service... Dec 2 02:57:19 localhost systemd[1]: Started Time & Date Service. 
Dec 2 02:57:21 localhost python3[45746]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:21 localhost python3[45763]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:22 localhost python3[45780]: ansible-slurp Invoked with src=/etc/tuned/active_profile Dec 2 02:57:22 localhost python3[45796]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 02:57:23 localhost python3[45812]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:23 localhost python3[45828]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:23 localhost python3[45876]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:24 localhost 
python3[45919]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662243.5780742-78161-8054668052950/source _original_basename=tmp24xzf3zx follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:24 localhost python3[45981]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:25 localhost python3[46024]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662244.4650989-78213-258433896603807/source _original_basename=tmpakzmg5ne follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:25 localhost python3[46054]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 02:57:25 localhost systemd[1]: Reloading. Dec 2 02:57:25 localhost systemd-sysv-generator[46087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 02:57:25 localhost systemd-rc-local-generator[46080]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 02:57:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 02:57:26 localhost python3[46108]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:26 localhost python3[46124]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:26 localhost systemd[35843]: Created slice User Background Tasks Slice. Dec 2 02:57:26 localhost systemd[35843]: Starting Cleanup of User's Temporary Files and Directories... Dec 2 02:57:26 localhost systemd[35843]: Finished Cleanup of User's Temporary Files and Directories. Dec 2 02:57:27 localhost python3[46142]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 02:57:27 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. 
Dec 2 02:57:27 localhost python3[46159]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:27 localhost python3[46175]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:28 localhost python3[46223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 02:57:28 localhost python3[46266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662247.886411-78380-122593191027078/source _original_basename=tmp9y49mo2w follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:49 localhost python3[46326]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 02:57:50 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Dec 2 02:57:50 localhost python3[46359]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Dec 2 02:57:50 localhost python3[46391]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 02:57:50 localhost python3[46407]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:51 localhost python3[46423]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 02:57:51 localhost python3[46439]: 
ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Dec 2 02:57:52 localhost kernel: SELinux: Converting 2707 SID table entries... Dec 2 02:57:52 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 02:57:52 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 02:57:52 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 02:57:52 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=13 res=1 Dec 2 02:57:52 localhost python3[46475]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 02:57:54 localhost python3[46612]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': 
{'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Dec 2 02:57:54 localhost rsyslogd[754]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Dec 2 02:57:55 localhost python3[46628]: ansible-file Invoked with 
path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 02:57:55 localhost python3[46644]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 02:57:56 localhost python3[46660]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, 
{'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': 
'/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
Dec 2 02:58:00 localhost python3[46708]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:58:01 localhost python3[46751]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662280.4645784-79786-116757097494743/source _original_basename=tmpgkd2kwug follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 02:58:01 localhost python3[46781]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:58:03 localhost python3[46904]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 02:58:04 localhost python3[47025]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 2 02:58:07 localhost python3[47041]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:58:08 localhost python3[47058]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 2 02:58:12 localhost dbus-broker-launch[742]: Noticed file-system modification, trigger reload.
Dec 2 02:58:12 localhost dbus-broker-launch[18431]: Noticed file-system modification, trigger reload.
Dec 2 02:58:12 localhost dbus-broker-launch[18431]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 2 02:58:12 localhost dbus-broker-launch[18431]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 2 02:58:12 localhost dbus-broker-launch[742]: Noticed file-system modification, trigger reload.
Dec 2 02:58:12 localhost systemd[1]: Reexecuting.
Dec 2 02:58:12 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 2 02:58:12 localhost systemd[1]: Detected virtualization kvm.
Dec 2 02:58:12 localhost systemd[1]: Detected architecture x86-64.
Dec 2 02:58:12 localhost systemd-rc-local-generator[47108]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:58:12 localhost systemd-sysv-generator[47114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:58:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:58:21 localhost kernel: SELinux: Converting 2707 SID table entries...
Dec 2 02:58:21 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability open_perms=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability always_check_network=0
Dec 2 02:58:21 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 2 02:58:21 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 2 02:58:21 localhost dbus-broker-launch[742]: Noticed file-system modification, trigger reload.
Dec 2 02:58:21 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=14 res=1
Dec 2 02:58:21 localhost dbus-broker-launch[742]: Noticed file-system modification, trigger reload.
Dec 2 02:58:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:58:22 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 02:58:22 localhost systemd[1]: Reloading.
Dec 2 02:58:22 localhost systemd-rc-local-generator[47202]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:58:22 localhost systemd-sysv-generator[47206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:58:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:58:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 02:58:22 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:58:22 localhost systemd[1]: Stopping Journal Service...
Dec 2 02:58:22 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 2 02:58:22 localhost systemd-journald[619]: Received SIGTERM from PID 1 (systemd).
Dec 2 02:58:22 localhost systemd-journald[619]: Journal stopped
Dec 2 02:58:22 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 2 02:58:22 localhost systemd[1]: Stopped Journal Service.
Dec 2 02:58:22 localhost systemd[1]: systemd-journald.service: Consumed 2.394s CPU time.
Dec 2 02:58:22 localhost systemd[1]: Starting Journal Service...
Dec 2 02:58:22 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 2 02:58:22 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 2 02:58:22 localhost systemd[1]: systemd-udevd.service: Consumed 2.796s CPU time.
Dec 2 02:58:22 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 2 02:58:22 localhost systemd-journald[47611]: Journal started
Dec 2 02:58:22 localhost systemd-journald[47611]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 12.1M, max 314.7M, 302.6M free.
Dec 2 02:58:22 localhost systemd[1]: Started Journal Service.
Dec 2 02:58:22 localhost systemd-journald[47611]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 2 02:58:22 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 2 02:58:22 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:58:22 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 2 02:58:22 localhost systemd-udevd[47615]: Using default interface naming scheme 'rhel-9.0'.
Dec 2 02:58:22 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 2 02:58:22 localhost systemd[1]: Reloading.
Dec 2 02:58:23 localhost systemd-rc-local-generator[48125]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 02:58:23 localhost systemd-sysv-generator[48129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 02:58:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 02:58:23 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 02:58:23 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 02:58:23 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 02:58:23 localhost systemd[1]: man-db-cache-update.service: Consumed 1.311s CPU time.
Dec 2 02:58:23 localhost systemd[1]: run-r6161ad4a52914a2aa570c55418ab4a33.service: Deactivated successfully.
Dec 2 02:58:23 localhost systemd[1]: run-r2b8b533960ea431da1e01c2576fcd26a.service: Deactivated successfully.
Dec 2 02:58:25 localhost python3[48553]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Dec 2 02:58:25 localhost python3[48572]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 02:58:26 localhost python3[48590]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:58:26 localhost python3[48590]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Dec 2 02:58:26 localhost python3[48590]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Dec 2 02:58:34 localhost podman[48602]: 2025-12-02 07:58:26.65340634 +0000 UTC m=+0.050326178 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 2 02:58:34 localhost python3[48590]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json
Dec 2 02:58:34 localhost python3[48703]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:58:34 localhost python3[48703]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Dec 2 02:58:34 localhost python3[48703]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Dec 2 02:58:42 localhost podman[48715]: 2025-12-02 07:58:35.006137911 +0000 UTC m=+0.044771792 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 2 02:58:42 localhost python3[48703]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json
Dec 2 02:58:43 localhost python3[48817]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:58:43 localhost python3[48817]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Dec 2 02:58:43 localhost python3[48817]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Dec 2 02:59:01 localhost podman[48831]: 2025-12-02 07:58:43.204949959 +0000 UTC m=+0.047198135 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 2 02:59:01 localhost python3[48817]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json
Dec 2 02:59:01 localhost systemd[1]: tmp-crun.xYTdwa.mount: Deactivated successfully.
Dec 2 02:59:01 localhost podman[49693]: 2025-12-02 07:59:01.570018219 +0000 UTC m=+0.115813341 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, RELEASE=main, io.openshift.tags=rhceph ceph, release=1763362218, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux )
Dec 2 02:59:01 localhost python3[49702]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:01 localhost python3[49702]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Dec 2 02:59:01 localhost python3[49702]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Dec 2 02:59:01 localhost podman[49693]: 2025-12-02 07:59:01.718142287 +0000 UTC m=+0.263937319 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, name=rhceph, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=)
Dec 2 02:59:17 localhost podman[49729]: 2025-12-02 07:59:01.717270395 +0000 UTC m=+0.046088347 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 2 02:59:17 localhost python3[49702]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json
Dec 2 02:59:17 localhost python3[49931]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:17 localhost python3[49931]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Dec 2 02:59:17 localhost python3[49931]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Dec 2 02:59:24 localhost podman[49945]: 2025-12-02 07:59:17.636254064 +0000 UTC m=+0.041411249 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 2 02:59:24 localhost python3[49931]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json
Dec 2 02:59:24 localhost python3[50287]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:24 localhost python3[50287]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Dec 2 02:59:24 localhost python3[50287]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Dec 2 02:59:29 localhost podman[50299]: 2025-12-02 07:59:24.531149034 +0000 UTC m=+0.041659215 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 2 02:59:29 localhost python3[50287]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json
Dec 2 02:59:29 localhost python3[50378]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:29 localhost python3[50378]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Dec 2 02:59:29 localhost python3[50378]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Dec 2 02:59:31 localhost podman[50390]: 2025-12-02 07:59:29.831405019 +0000 UTC m=+0.038341620 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 2 02:59:31 localhost python3[50378]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json
Dec 2 02:59:32 localhost python3[50468]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:32 localhost python3[50468]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Dec 2 02:59:32 localhost python3[50468]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Dec 2 02:59:34 localhost podman[50481]: 2025-12-02 07:59:32.280853304 +0000 UTC m=+0.050851388 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 2 02:59:34 localhost python3[50468]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json
Dec 2 02:59:35 localhost python3[50559]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:35 localhost python3[50559]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Dec 2 02:59:35 localhost python3[50559]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Dec 2 02:59:37 localhost podman[50571]: 2025-12-02 07:59:35.293712433 +0000 UTC m=+0.045543423 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 2 02:59:37 localhost python3[50559]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json
Dec 2 02:59:37 localhost python3[50649]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:37 localhost python3[50649]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Dec 2 02:59:38 localhost python3[50649]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Dec 2 02:59:41 localhost podman[50661]: 2025-12-02 07:59:38.058394633 +0000 UTC m=+0.032357729 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 2 02:59:41 localhost python3[50649]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Dec 2 02:59:42 localhost python3[50750]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 2 02:59:42 localhost python3[50750]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Dec 2 02:59:42 localhost python3[50750]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Dec 2 02:59:44 localhost podman[50764]: 2025-12-02 07:59:42.318556666 +0000 UTC m=+0.046503699 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 2 02:59:44 localhost python3[50750]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Dec 2 02:59:45 localhost python3[50840]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:59:47 localhost ansible-async_wrapper.py[51012]: Invoked with 241885978677 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662386.7326825-82665-83397263209155/AnsiballZ_command.py _
Dec 2 02:59:47 localhost ansible-async_wrapper.py[51015]: Starting module and watcher
Dec 2 02:59:47 localhost ansible-async_wrapper.py[51015]: Start watching 51016 (3600)
Dec 2 02:59:47 localhost ansible-async_wrapper.py[51016]: Start module (51016)
Dec 2 02:59:47 localhost ansible-async_wrapper.py[51012]: Return async_wrapper task started.
Dec 2 02:59:47 localhost python3[51036]: ansible-ansible.legacy.async_status Invoked with jid=241885978677.51012 mode=status _async_dir=/tmp/.ansible_async
Dec 2 02:59:51 localhost puppet-user[51035]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 02:59:51 localhost puppet-user[51035]: (file: /etc/puppet/hiera.yaml)
Dec 2 02:59:51 localhost puppet-user[51035]: Warning: Undefined variable '::deploy_config_name';
Dec 2 02:59:51 localhost puppet-user[51035]: (file & line not available)
Dec 2 02:59:51 localhost puppet-user[51035]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 02:59:51 localhost puppet-user[51035]: (file & line not available)
Dec 2 02:59:51 localhost puppet-user[51035]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 2 02:59:51 localhost puppet-user[51035]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 2 02:59:51 localhost puppet-user[51035]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.12 seconds
Dec 2 02:59:51 localhost puppet-user[51035]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Dec 2 02:59:51 localhost puppet-user[51035]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Dec 2 02:59:51 localhost puppet-user[51035]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Dec 2 02:59:51 localhost puppet-user[51035]: Notice: Applied catalog in 0.05 seconds
Dec 2 02:59:51 localhost puppet-user[51035]: Application:
Dec 2 02:59:51 localhost puppet-user[51035]: Initial environment: production
Dec 2 02:59:51 localhost puppet-user[51035]: Converged environment: production
Dec 2 02:59:51 localhost puppet-user[51035]: Run mode: user
Dec 2 02:59:51 localhost puppet-user[51035]: Changes:
Dec 2 02:59:51 localhost puppet-user[51035]: Total: 3
Dec 2 02:59:51 localhost puppet-user[51035]: Events:
Dec 2 02:59:51 localhost puppet-user[51035]: Success: 3
Dec 2 02:59:51 localhost puppet-user[51035]: Total: 3
Dec 2 02:59:51 localhost puppet-user[51035]: Resources:
Dec 2 02:59:51 localhost puppet-user[51035]: Changed: 3
Dec 2 02:59:51 localhost puppet-user[51035]: Out of sync: 3
Dec 2 02:59:51 localhost puppet-user[51035]: Total: 10
Dec 2 02:59:51 localhost puppet-user[51035]: Time:
Dec 2 02:59:51 localhost puppet-user[51035]: Schedule: 0.00
Dec 2 02:59:51 localhost puppet-user[51035]: File: 0.00
Dec 2 02:59:51 localhost puppet-user[51035]: Exec: 0.01
Dec 2 02:59:51 localhost puppet-user[51035]: Augeas: 0.02
Dec 2 02:59:51 localhost puppet-user[51035]: Transaction evaluation: 0.05
Dec 2 02:59:51 localhost puppet-user[51035]: Catalog application: 0.05
Dec 2 02:59:51 localhost puppet-user[51035]: Config retrieval: 0.15
Dec 2 02:59:51 localhost puppet-user[51035]: Last run: 1764662391
Dec 2 02:59:51 localhost puppet-user[51035]: Filebucket: 0.00
Dec 2 02:59:51 localhost puppet-user[51035]: Total: 0.05
Dec 2 02:59:51 localhost puppet-user[51035]: Version:
Dec 2 02:59:51 localhost puppet-user[51035]: Config: 1764662391
Dec 2 02:59:51 localhost puppet-user[51035]: Puppet: 7.10.0
Dec 2 02:59:51 localhost ansible-async_wrapper.py[51016]: Module complete (51016)
Dec 2 02:59:52 localhost ansible-async_wrapper.py[51015]: Done in kid B.
Dec 2 02:59:57 localhost python3[51266]: ansible-ansible.legacy.async_status Invoked with jid=241885978677.51012 mode=status _async_dir=/tmp/.ansible_async
Dec 2 02:59:58 localhost python3[51282]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 02:59:58 localhost python3[51298]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 02:59:59 localhost python3[51346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 02:59:59 localhost python3[51389]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662398.962913-82874-175783869190601/source _original_basename=tmpnlkqsi8w follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 02:59:59 localhost python3[51419]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:01 localhost python3[51522]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 2 03:00:01 localhost python3[51541]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 03:00:01 localhost python3[51557]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005541913 step=1 update_config_hash_only=False
Dec 2 03:00:02 localhost python3[51573]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:00:03 localhost python3[51589]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 2 03:00:03 localhost python3[51635]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 2 03:00:04 localhost python3[51707]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Dec 2 03:00:05 localhost podman[51896]: 2025-12-02 08:00:05.078080846 +0000 UTC m=+0.102450652 container create 79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream,
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, container_name=container-puppet-iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, release=1761123044, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public) Dec 2 03:00:05 localhost podman[51884]: 2025-12-02 08:00:04.991572711 +0000 UTC m=+0.026811472 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 2 03:00:05 localhost podman[51896]: 2025-12-02 08:00:05.007430247 +0000 UTC m=+0.031800023 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 2 03:00:05 localhost podman[51914]: 2025-12-02 08:00:05.112350694 +0000 UTC m=+0.118765171 container create 79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, distribution-scope=public, config_id=tripleo_puppet_step1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': 
"include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-nova_libvirt, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16.scope. Dec 2 03:00:05 localhost podman[51914]: 2025-12-02 08:00:05.020829178 +0000 UTC m=+0.027243675 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:00:05 localhost podman[51905]: 2025-12-02 08:00:05.131386272 +0000 UTC m=+0.146871276 container create 028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, vcs-type=git, container_name=container-puppet-crond, architecture=x86_64, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 2 03:00:05 localhost podman[51906]: 2025-12-02 08:00:05.14049244 +0000 UTC m=+0.150110780 container create 8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=container-puppet-metrics_qdr, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:00:05 localhost systemd[1]: Started libcrun container. Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d.scope. Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0052f13d91303294194500e25d2f8e0888afaf1ca7e6de5d98fbefe304631472/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0052f13d91303294194500e25d2f8e0888afaf1ca7e6de5d98fbefe304631472/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf.scope. Dec 2 03:00:05 localhost podman[51905]: 2025-12-02 08:00:05.061505982 +0000 UTC m=+0.076991016 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60.scope. 
Dec 2 03:00:05 localhost podman[51896]: 2025-12-02 08:00:05.164284964 +0000 UTC m=+0.188654770 container init 79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-iscsid-container) Dec 2 03:00:05 localhost podman[51906]: 2025-12-02 08:00:05.063089424 +0000 UTC m=+0.072707764 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:05 localhost systemd[1]: Started libcrun container. Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b388412fca905b307e07ab1555f64621018b9abe733ff2c7e7266decb6c12c8d/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:05 localhost systemd[1]: Started libcrun container. Dec 2 03:00:05 localhost systemd[1]: Started libcrun container. 
Dec 2 03:00:05 localhost podman[51896]: 2025-12-02 08:00:05.177598733 +0000 UTC m=+0.201968529 container start 79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, distribution-scope=public, config_id=tripleo_puppet_step1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.openshift.expose-services=, container_name=container-puppet-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:00:05 localhost podman[51896]: 2025-12-02 08:00:05.178439574 +0000 UTC m=+0.202809380 container attach 79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:00:05 localhost podman[51884]: 2025-12-02 08:00:05.182195752 +0000 UTC m=+0.217434493 container create 486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, release=1761123044, architecture=x86_64, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=container-puppet-collectd, build-date=2025-11-18T22:51:28Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1605e3642cbc6f4a340468563ba343adf6d0f8a3115728727d8e4543418cb20/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/104925f4f3140d86c4d76991cbbe20b0ea2114e629deebdf08f0de90504ded5f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:05 localhost systemd[1]: Started libpod-conmon-486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1.scope. Dec 2 03:00:05 localhost systemd[1]: Started libcrun container. 
Dec 2 03:00:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b063472ae149eb518ac7d99c3a97d11dcdfc09eaeb34ff91e9c6e02d02ccc47e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 2 03:00:06 localhost podman[51905]: 2025-12-02 08:00:06.458215921 +0000 UTC m=+1.473700945 container init 028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, summary=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 2 03:00:06 localhost podman[51905]: 2025-12-02 08:00:06.469845645 +0000 UTC m=+1.485330719 container start 028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=container-puppet-crond, config_id=tripleo_puppet_step1, com.redhat.component=openstack-cron-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team)
Dec 2 03:00:06 localhost podman[51905]: 2025-12-02 08:00:06.470211944 +0000 UTC m=+1.485697018 container attach 028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, container_name=container-puppet-crond, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1)
Dec 2 03:00:06 localhost podman[51884]: 2025-12-02 08:00:06.502589132 +0000 UTC m=+1.537827893 container init 486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=container-puppet-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_puppet_step1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 2 03:00:06 localhost systemd[1]: tmp-crun.Mw5HZ6.mount: Deactivated successfully.
Dec 2 03:00:06 localhost podman[51884]: 2025-12-02 08:00:06.521736894 +0000 UTC m=+1.556975655 container start 486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 2 03:00:06 localhost podman[51884]: 2025-12-02 08:00:06.522064332 +0000 UTC m=+1.557303093 container attach 486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, vcs-type=git, config_id=tripleo_puppet_step1, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, release=1761123044, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1)
Dec 2 03:00:06 localhost podman[51906]: 2025-12-02 08:00:06.571025124 +0000 UTC m=+1.580643454 container init 8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, url=https://www.redhat.com, config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 2 03:00:06 localhost podman[51906]: 2025-12-02 08:00:06.641115159 +0000 UTC m=+1.650733529 container start 8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro',
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, distribution-scope=public, io.openshift.expose-services=)
Dec 2 03:00:06 localhost podman[51906]: 2025-12-02 08:00:06.641469918 +0000 UTC m=+1.651088278 container attach 8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, container_name=container-puppet-metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, config_id=tripleo_puppet_step1, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.)
Dec 2 03:00:06 localhost podman[51914]: 2025-12-02 08:00:06.654804137 +0000 UTC m=+1.661218634 container init 79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=container-puppet-nova_libvirt, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 2 03:00:06 localhost podman[51914]: 2025-12-02 08:00:06.661498262 +0000 UTC m=+1.667912749 container start 79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1,
name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 2 03:00:06 localhost podman[51914]: 2025-12-02 08:00:06.661728249 +0000 UTC m=+1.668142756 container attach 79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, tcib_managed=true, container_name=container-puppet-nova_libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes':
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 2 03:00:08 localhost puppet-user[52021]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:00:08 localhost puppet-user[52021]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:00:08 localhost puppet-user[52021]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:00:08 localhost puppet-user[52021]: (file & line not available)
Dec 2 03:00:08 localhost puppet-user[52002]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:00:08 localhost puppet-user[52002]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:00:08 localhost puppet-user[52002]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:00:08 localhost puppet-user[52002]: (file & line not available)
Dec 2 03:00:08 localhost puppet-user[52021]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:00:08 localhost puppet-user[52021]: (file & line not available)
Dec 2 03:00:08 localhost puppet-user[52042]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:00:08 localhost puppet-user[52042]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:00:08 localhost puppet-user[52042]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:00:08 localhost puppet-user[52042]: (file & line not available)
Dec 2 03:00:08 localhost ovs-vsctl[52352]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Dec 2 03:00:08 localhost puppet-user[52049]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:00:08 localhost puppet-user[52049]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:00:08 localhost puppet-user[52049]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:00:08 localhost puppet-user[52049]: (file & line not available)
Dec 2 03:00:08 localhost puppet-user[52002]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:00:08 localhost puppet-user[52002]: (file & line not available)
Dec 2 03:00:08 localhost puppet-user[52021]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.08 seconds
Dec 2 03:00:08 localhost puppet-user[52042]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:00:08 localhost puppet-user[52042]: (file & line not available)
Dec 2 03:00:08 localhost puppet-user[52049]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:00:08 localhost puppet-user[52049]: (file & line not available)
Dec 2 03:00:08 localhost puppet-user[52021]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0'
Dec 2 03:00:08 localhost puppet-user[52021]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created
Dec 2 03:00:08 localhost puppet-user[52002]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.10 seconds
Dec 2 03:00:08 localhost puppet-user[52049]: Notice: Accepting previously invalid value for target type 'Integer'
Dec 2 03:00:08 localhost puppet-user[52021]: Notice: Applied catalog in 0.04 seconds
Dec 2 03:00:08 localhost puppet-user[52021]: Application:
Dec 2 03:00:08 localhost puppet-user[52021]: Initial environment: production
Dec 2 03:00:08 localhost puppet-user[52021]: Converged environment: production
Dec 2 03:00:08 localhost puppet-user[52021]: Run mode: user
Dec 2 03:00:08 localhost puppet-user[52021]: Changes:
Dec 2 03:00:08 localhost puppet-user[52021]: Total: 2
Dec 2 03:00:08 localhost puppet-user[52021]: Events:
Dec 2 03:00:08 localhost puppet-user[52021]: Success: 2
Dec 2 03:00:08 localhost puppet-user[52021]: Total: 2
Dec 2 03:00:08 localhost puppet-user[52021]: Resources:
Dec 2 03:00:08 localhost puppet-user[52021]: Changed: 2
Dec 2 03:00:08 localhost puppet-user[52021]: Out of sync: 2
Dec 2 03:00:08 localhost puppet-user[52021]: Skipped: 7
Dec 2 03:00:08 localhost puppet-user[52021]: Total: 9
Dec 2 03:00:08 localhost puppet-user[52021]: Time:
Dec 2 03:00:08 localhost puppet-user[52021]: File: 0.01
Dec 2 03:00:08 localhost puppet-user[52021]: Cron: 0.01
Dec 2 03:00:08 localhost puppet-user[52021]: Transaction evaluation: 0.04
Dec 2 03:00:08 localhost puppet-user[52021]: Catalog application:
0.04 Dec 2 03:00:08 localhost puppet-user[52021]: Config retrieval: 0.11 Dec 2 03:00:08 localhost puppet-user[52021]: Last run: 1764662408 Dec 2 03:00:08 localhost puppet-user[52021]: Total: 0.05 Dec 2 03:00:08 localhost puppet-user[52021]: Version: Dec 2 03:00:08 localhost puppet-user[52021]: Config: 1764662408 Dec 2 03:00:08 localhost puppet-user[52021]: Puppet: 7.10.0 Dec 2 03:00:08 localhost puppet-user[52049]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.13 seconds Dec 2 03:00:08 localhost puppet-user[52002]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Dec 2 03:00:08 localhost puppet-user[52002]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Dec 2 03:00:08 localhost puppet-user[52002]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Dec 2 03:00:08 localhost puppet-user[52049]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Dec 2 03:00:08 localhost puppet-user[52049]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Dec 2 03:00:08 localhost puppet-user[52049]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Dec 2 03:00:08 localhost puppet-user[52049]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Dec 2 03:00:08 localhost puppet-user[52049]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}ce90bebf2484546c06edc1852bcd172057e5aa8cc85a9be28cc54d45adc16782' Dec 2 03:00:08 localhost puppet-user[52049]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Dec 2 03:00:08 localhost puppet-user[52049]: Notice: 
/Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Dec 2 03:00:08 localhost puppet-user[52049]: Notice: Applied catalog in 0.03 seconds Dec 2 03:00:08 localhost puppet-user[52049]: Application: Dec 2 03:00:08 localhost puppet-user[52049]: Initial environment: production Dec 2 03:00:08 localhost puppet-user[52049]: Converged environment: production Dec 2 03:00:08 localhost puppet-user[52049]: Run mode: user Dec 2 03:00:08 localhost puppet-user[52049]: Changes: Dec 2 03:00:08 localhost puppet-user[52049]: Total: 7 Dec 2 03:00:08 localhost puppet-user[52049]: Events: Dec 2 03:00:08 localhost puppet-user[52049]: Success: 7 Dec 2 03:00:08 localhost puppet-user[52049]: Total: 7 Dec 2 03:00:08 localhost puppet-user[52049]: Resources: Dec 2 03:00:08 localhost puppet-user[52049]: Skipped: 13 Dec 2 03:00:08 localhost puppet-user[52049]: Changed: 5 Dec 2 03:00:08 localhost puppet-user[52049]: Out of sync: 5 Dec 2 03:00:08 localhost puppet-user[52049]: Total: 20 Dec 2 03:00:08 localhost puppet-user[52049]: Time: Dec 2 03:00:08 localhost puppet-user[52049]: File: 0.01 Dec 2 03:00:08 localhost puppet-user[52049]: Transaction evaluation: 0.03 Dec 2 03:00:08 localhost puppet-user[52049]: Catalog application: 0.03 Dec 2 03:00:08 localhost puppet-user[52049]: Config retrieval: 0.16 Dec 2 03:00:08 localhost puppet-user[52049]: Last run: 1764662408 Dec 2 03:00:08 localhost puppet-user[52049]: Total: 0.03 Dec 2 03:00:08 localhost puppet-user[52049]: Version: Dec 2 03:00:08 localhost puppet-user[52049]: Config: 1764662408 Dec 2 03:00:08 localhost puppet-user[52049]: Puppet: 7.10.0 Dec 2 03:00:08 localhost puppet-user[52063]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 2 03:00:08 localhost puppet-user[52063]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:08 localhost puppet-user[52063]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:08 localhost puppet-user[52063]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[52063]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:08 localhost puppet-user[52063]: (file & line not available) Dec 2 03:00:08 localhost puppet-user[52042]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.35 seconds Dec 2 03:00:08 localhost systemd[1]: libpod-028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf.scope: Deactivated successfully. Dec 2 03:00:08 localhost systemd[1]: libpod-028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf.scope: Consumed 2.149s CPU time. Dec 2 03:00:08 localhost puppet-user[52063]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Dec 2 03:00:08 localhost puppet-user[52063]: in a future release. Use nova::cinder::os_region_name instead Dec 2 03:00:08 localhost puppet-user[52063]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Dec 2 03:00:08 localhost puppet-user[52063]: in a future release. 
Use nova::cinder::catalog_info instead Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Dec 2 03:00:08 localhost systemd[1]: libpod-8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60.scope: Deactivated successfully. Dec 2 03:00:08 localhost systemd[1]: libpod-8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60.scope: Consumed 2.108s CPU time. 
Dec 2 03:00:08 localhost podman[51906]: 2025-12-02 08:00:08.827666615 +0000 UTC m=+3.837284975 container died 8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=container-puppet-metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:00:08 localhost podman[52477]: 2025-12-02 08:00:08.84692644 +0000 UTC m=+0.077683755 container died 028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.component=openstack-cron-container, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, container_name=container-puppet-crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Dec 2 03:00:08 localhost 
puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Dec 2 03:00:08 localhost systemd[1]: tmp-crun.PvtXeP.mount: Deactivated successfully. Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: 
/Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Dec 2 03:00:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:08 localhost systemd[1]: var-lib-containers-storage-overlay-b388412fca905b307e07ab1555f64621018b9abe733ff2c7e7266decb6c12c8d-merged.mount: Deactivated successfully. Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Dec 2 03:00:08 localhost puppet-user[52063]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. 
(file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Dec 2 03:00:08 localhost puppet-user[52002]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Dec 2 03:00:08 localhost puppet-user[52002]: Notice: Applied catalog in 0.49 seconds Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Dec 2 03:00:08 localhost puppet-user[52002]: Application: Dec 2 03:00:08 localhost puppet-user[52002]: Initial environment: production Dec 2 03:00:08 localhost puppet-user[52002]: Converged environment: production Dec 2 03:00:08 localhost puppet-user[52002]: Run mode: user Dec 2 03:00:08 localhost puppet-user[52002]: Changes: Dec 2 03:00:08 localhost puppet-user[52002]: Total: 4 Dec 2 03:00:08 localhost puppet-user[52002]: Events: Dec 2 03:00:08 localhost puppet-user[52002]: Success: 4 Dec 2 03:00:08 localhost puppet-user[52002]: Total: 4 Dec 2 03:00:08 localhost puppet-user[52002]: Resources: Dec 2 03:00:08 localhost puppet-user[52002]: Changed: 4 Dec 2 03:00:08 localhost puppet-user[52002]: Out of sync: 4 Dec 2 03:00:08 localhost puppet-user[52002]: Skipped: 8 Dec 2 03:00:08 localhost puppet-user[52002]: Total: 13 Dec 2 03:00:08 localhost puppet-user[52002]: Time: Dec 2 03:00:08 localhost puppet-user[52002]: File: 0.00 Dec 2 03:00:08 localhost puppet-user[52002]: Exec: 0.05 Dec 2 03:00:08 localhost puppet-user[52002]: Config retrieval: 0.13 Dec 2 03:00:08 localhost puppet-user[52002]: Augeas: 0.41 Dec 2 03:00:08 localhost 
puppet-user[52002]: Transaction evaluation: 0.47 Dec 2 03:00:08 localhost puppet-user[52002]: Catalog application: 0.49 Dec 2 03:00:08 localhost puppet-user[52002]: Last run: 1764662408 Dec 2 03:00:08 localhost puppet-user[52002]: Total: 0.49 Dec 2 03:00:08 localhost puppet-user[52002]: Version: Dec 2 03:00:08 localhost puppet-user[52002]: Config: 1764662408 Dec 2 03:00:08 localhost puppet-user[52002]: Puppet: 7.10.0 Dec 2 03:00:08 localhost podman[52477]: 2025-12-02 08:00:08.92182448 +0000 UTC m=+0.152581755 container cleanup 028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, container_name=container-puppet-crond, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}0d4e701b7b2398bbf396579a0713d46d3c496c79edc52f2e260456f359c9a46c' Dec 2 03:00:08 localhost podman[52500]: 2025-12-02 08:00:08.939791851 +0000 UTC m=+0.099522607 container cleanup 8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, vcs-type=git, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=container-puppet-metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:00:08 localhost systemd[1]: libpod-conmon-028da319cb862756f8deb29d192de17dcf54c8ccbdc71829a48e764c124ec0bf.scope: Deactivated successfully. Dec 2 03:00:08 localhost systemd[1]: libpod-conmon-8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60.scope: Deactivated successfully. Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Dec 2 03:00:08 localhost python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 
'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume 
/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 2 03:00:08 localhost python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: 
/Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Dec 2 03:00:08 localhost puppet-user[52063]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Dec 2 03:00:08 localhost puppet-user[52063]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Dec 2 03:00:08 localhost puppet-user[52063]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. 
(file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Dec 2 03:00:08 localhost puppet-user[52042]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Dec 2 03:00:09 localhost puppet-user[52042]: Notice: Applied catalog in 0.29 seconds Dec 2 03:00:09 localhost puppet-user[52042]: Application: Dec 2 03:00:09 localhost puppet-user[52042]: Initial environment: production Dec 2 03:00:09 localhost puppet-user[52042]: Converged environment: production Dec 2 03:00:09 localhost puppet-user[52042]: Run mode: user Dec 2 03:00:09 localhost puppet-user[52042]: Changes: Dec 2 03:00:09 localhost puppet-user[52042]: Total: 43 Dec 2 03:00:09 localhost puppet-user[52042]: Events: Dec 2 03:00:09 localhost puppet-user[52042]: Success: 43 Dec 2 
03:00:09 localhost puppet-user[52042]: Total: 43 Dec 2 03:00:09 localhost puppet-user[52042]: Resources: Dec 2 03:00:09 localhost puppet-user[52042]: Skipped: 14 Dec 2 03:00:09 localhost puppet-user[52042]: Changed: 38 Dec 2 03:00:09 localhost puppet-user[52042]: Out of sync: 38 Dec 2 03:00:09 localhost puppet-user[52042]: Total: 82 Dec 2 03:00:09 localhost puppet-user[52042]: Time: Dec 2 03:00:09 localhost puppet-user[52042]: Concat fragment: 0.00 Dec 2 03:00:09 localhost puppet-user[52042]: Concat file: 0.00 Dec 2 03:00:09 localhost puppet-user[52042]: File: 0.13 Dec 2 03:00:09 localhost puppet-user[52042]: Transaction evaluation: 0.28 Dec 2 03:00:09 localhost puppet-user[52042]: Catalog application: 0.29 Dec 2 03:00:09 localhost puppet-user[52042]: Config retrieval: 0.41 Dec 2 03:00:09 localhost puppet-user[52042]: Last run: 1764662409 Dec 2 03:00:09 localhost puppet-user[52042]: Total: 0.29 Dec 2 03:00:09 localhost puppet-user[52042]: Version: Dec 2 03:00:09 localhost puppet-user[52042]: Config: 1764662408 Dec 2 03:00:09 localhost puppet-user[52042]: Puppet: 7.10.0 Dec 2 03:00:09 localhost puppet-user[52063]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Dec 2 03:00:09 localhost puppet-user[52063]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Dec 2 03:00:09 localhost puppet-user[52063]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance Dec 2 03:00:09 localhost systemd[1]: libpod-79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16.scope: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: libpod-79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16.scope: Consumed 2.590s CPU time. 
Dec 2 03:00:09 localhost podman[51896]: 2025-12-02 08:00:09.237343241 +0000 UTC m=+4.261713037 container died 79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.openshift.expose-services=, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_puppet_step1, container_name=container-puppet-iscsid, 
managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:00:09 localhost puppet-user[52063]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Dec 2 03:00:09 localhost systemd[1]: libpod-486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1.scope: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: libpod-486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1.scope: Consumed 2.589s CPU time. 
Dec 2 03:00:09 localhost podman[51884]: 2025-12-02 08:00:09.41950106 +0000 UTC m=+4.454739811 container died 486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-collectd, io.openshift.expose-services=, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, container_name=container-puppet-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Dec 2 03:00:09 localhost podman[52651]: 2025-12-02 08:00:09.429887262 +0000 UTC m=+0.197899583 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 2 03:00:09 localhost podman[52673]: 2025-12-02 08:00:09.707587452 +0000 UTC m=+0.459112851 container cleanup 79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_puppet_step1, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, container_name=container-puppet-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible) Dec 2 03:00:09 localhost systemd[1]: libpod-conmon-79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16.scope: Deactivated successfully. 
Dec 2 03:00:09 localhost python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 2 03:00:09 localhost podman[51781]: 2025-12-02 08:00:04.912725687 +0000 UTC m=+0.042479753 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 2 03:00:09 localhost podman[52713]: 2025-12-02 08:00:09.758229968 +0000 UTC m=+0.328564483 container cleanup 486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat 
OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 
17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:00:09 localhost systemd[1]: libpod-conmon-486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1.scope: Deactivated successfully. Dec 2 03:00:09 localhost podman[52651]: 2025-12-02 08:00:09.767724746 +0000 UTC m=+0.535737037 container create f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:00:09 localhost systemd[1]: Started libpod-conmon-f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a.scope. Dec 2 03:00:09 localhost systemd[1]: Started libcrun container. 
Dec 2 03:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5908dabcdc4beecd14375872c1a5b4a4e28c3db557b9e42f64a01ed422f93ce2/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:09 localhost podman[52972]: 2025-12-02 08:00:09.807656703 +0000 UTC m=+0.320649427 container create 09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12, build-date=2025-11-18T23:34:05Z, config_id=tripleo_puppet_step1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Dec 2 03:00:09 localhost podman[52972]: 2025-12-02 08:00:09.721295721 +0000 UTC m=+0.234288445 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:00:09 localhost systemd[1]: Started libpod-conmon-09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d.scope. 
Dec 2 03:00:09 localhost podman[52651]: 2025-12-02 08:00:09.867204151 +0000 UTC m=+0.635216472 container init f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., architecture=x86_64, container_name=container-puppet-rsyslog, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T22:49:49Z, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1) Dec 2 03:00:09 localhost systemd[1]: Started libcrun container. Dec 2 03:00:09 localhost podman[52651]: 2025-12-02 08:00:09.877007998 +0000 UTC m=+0.645020319 container start f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 
'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, version=17.1.12, com.redhat.component=openstack-rsyslog-container) Dec 2 03:00:09 localhost puppet-user[52063]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 1.38 seconds Dec 2 03:00:09 localhost podman[52651]: 2025-12-02 
08:00:09.882632195 +0000 UTC m=+0.650644516 container attach f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-rsyslog, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc.) Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-b063472ae149eb518ac7d99c3a97d11dcdfc09eaeb34ff91e9c6e02d02ccc47e-merged.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-486d48aed113ca9f72cd2937199dcf86d7354e3bd3109dee0d41df3f96bdd7d1-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-d1605e3642cbc6f4a340468563ba343adf6d0f8a3115728727d8e4543418cb20-merged.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a2f19c4db6822ef526d58aece58b61f7cc1170e0d395c0acd9eda9b1c2e9d60-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-0052f13d91303294194500e25d2f8e0888afaf1ca7e6de5d98fbefe304631472-merged.mount: Deactivated successfully. Dec 2 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79cf949874d5e3aaf0f7dbc88df9f119a232e5884c4318bbe279da712f335e16-userdata-shm.mount: Deactivated successfully. 
Dec 2 03:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f9ede822be11b60c0a1703a4ec9607dd292d56847ed8465c37bae8fb9e0d08/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f9ede822be11b60c0a1703a4ec9607dd292d56847ed8465c37bae8fb9e0d08/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:09 localhost podman[52972]: 2025-12-02 08:00:09.896945389 +0000 UTC m=+0.409938113 container init 09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1) Dec 2 03:00:09 localhost podman[52972]: 2025-12-02 08:00:09.906737536 +0000 UTC m=+0.419730230 container start 09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc.) Dec 2 03:00:09 localhost podman[52972]: 2025-12-02 08:00:09.906952512 +0000 UTC m=+0.419945286 container attach 09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=container-puppet-ovn_controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:00:09 localhost python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 
--env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 2 03:00:09 localhost podman[53031]: 2025-12-02 08:00:09.962544287 +0000 UTC m=+0.087111271 container create d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.4, container_name=container-puppet-ceilometer, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 
'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-19T00:11:59Z, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-central-container, distribution-scope=public) Dec 2 03:00:10 localhost systemd[1]: Started libpod-conmon-d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257.scope. 
Dec 2 03:00:10 localhost podman[53031]: 2025-12-02 08:00:09.920468946 +0000 UTC m=+0.045035990 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 2 03:00:10 localhost systemd[1]: Started libcrun container. Dec 2 03:00:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7aea19432089756ed62f0f30cfa5a3f11dba2345bf487cdfbd5c2a4914be89/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:10 localhost podman[53031]: 2025-12-02 08:00:10.04362406 +0000 UTC m=+0.168191044 container init d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:11:59Z, io.buildah.version=1.41.4, tcib_managed=true, container_name=container-puppet-ceilometer, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-central-container, io.openshift.expose-services=) Dec 2 03:00:10 localhost podman[53031]: 2025-12-02 08:00:10.053000606 +0000 UTC m=+0.177567610 container start d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-central, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-19T00:11:59Z, container_name=container-puppet-ceilometer, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1) Dec 2 03:00:10 localhost podman[53031]: 2025-12-02 08:00:10.053336734 +0000 UTC m=+0.177903718 container attach d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-ceilometer, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, build-date=2025-11-19T00:11:59Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-central-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}5ba64817af7f9555281205611eb52d45214b5127a0e5ce894ff9b319c0723a16'
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe'
Dec 2 03:00:10 localhost puppet-user[52063]: Warning: Empty environment setting 'TLS_PASSWORD'
Dec 2 03:00:10 localhost puppet-user[52063]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182)
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}8c1883a65300cc327d1cb9c34702b30b2083e07e3f42b734ab7685f1cc6449ef'
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Dec 2 03:00:10 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Dec 2 03:00:11 localhost puppet-user[53072]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:00:11 localhost puppet-user[53072]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:00:11 localhost puppet-user[53072]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:00:11 localhost puppet-user[53072]: (file & line not available)
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Dec 2 03:00:11 localhost puppet-user[53072]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:00:11 localhost puppet-user[53072]: (file & line not available)
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Dec 2 03:00:11 localhost puppet-user[53086]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:00:11 localhost puppet-user[53086]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:00:11 localhost puppet-user[53086]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:00:11 localhost puppet-user[53086]: (file & line not available)
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[53086]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:00:11 localhost puppet-user[53086]: (file & line not available)
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Dec 2 03:00:11 localhost puppet-user[53145]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:00:11 localhost puppet-user[53145]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Dec 2 03:00:11 localhost puppet-user[53145]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:00:11 localhost puppet-user[53145]: (file & line not available)
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Dec 2 03:00:11 localhost puppet-user[53145]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:00:11 localhost puppet-user[53145]: (file & line not available)
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Dec 2 03:00:11 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Dec 2 03:00:11 localhost puppet-user[53072]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.24 seconds
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.25 seconds
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}3f62d179f65be7c16842a28abf994d6a58e30b2328fb95c74da2c0a9b9529a22'
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39)
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53370]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53072]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Dec 2 03:00:12 localhost ovs-vsctl[53372]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25)
Dec 2 03:00:12 localhost puppet-user[53072]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36)
Dec 2 03:00:12 localhost puppet-user[53145]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26)
Dec 2 03:00:12 localhost puppet-user[53072]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}c3b156c2c9f08abc530e7f7185a0499af26dcb54a74ced688ff254968e5ee0ca'
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53072]: Notice: Applied catalog in 0.12 seconds
Dec 2 03:00:12 localhost puppet-user[53072]: Application:
Dec 2 03:00:12 localhost puppet-user[53072]: Initial environment: production
Dec 2 03:00:12 localhost puppet-user[53072]: Converged environment: production
Dec 2 03:00:12 localhost puppet-user[53072]: Run mode: user
Dec 2 03:00:12 localhost puppet-user[53072]: Changes:
Dec 2 03:00:12 localhost puppet-user[53072]: Total: 3
Dec 2 03:00:12 localhost puppet-user[53072]: Events:
Dec 2 03:00:12 localhost puppet-user[53072]: Success: 3
Dec 2 03:00:12 localhost puppet-user[53072]: Total: 3
Dec 2 03:00:12 localhost puppet-user[53072]: Resources:
Dec 2 03:00:12 localhost puppet-user[53072]: Skipped: 11
Dec 2 03:00:12 localhost puppet-user[53072]: Changed: 3
Dec 2 03:00:12 localhost puppet-user[53072]: Out of sync: 3
Dec 2 03:00:12 localhost puppet-user[53072]: Total: 25
Dec 2 03:00:12 localhost puppet-user[53072]: Time:
Dec 2 03:00:12 localhost puppet-user[53072]: Concat file: 0.00
Dec 2 03:00:12 localhost puppet-user[53072]: Concat fragment: 0.00
Dec 2 03:00:12 localhost puppet-user[53072]: File: 0.02
Dec 2 03:00:12 localhost puppet-user[53072]: Transaction evaluation: 0.11
Dec 2 03:00:12 localhost puppet-user[53072]: Catalog application: 0.12
Dec 2 03:00:12 localhost puppet-user[53072]: Config retrieval: 0.28
Dec 2 03:00:12 localhost puppet-user[53072]: Last run: 1764662412
Dec 2 03:00:12 localhost puppet-user[53072]: Total: 0.12
Dec 2 03:00:12 localhost puppet-user[53072]: Version:
Dec 2 03:00:12 localhost puppet-user[53072]: Config: 1764662411
Dec 2 03:00:12 localhost puppet-user[53072]: Puppet: 7.10.0
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53374]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.107
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53379]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005541913.localdomain
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005541913.novalocal' to 'np0005541913.localdomain'
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53385]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53390]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53392]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.37 seconds
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53394]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53403]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53412]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53414]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:9a:ba:cf
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53421]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created
Dec 2 03:00:12 localhost systemd[1]: libpod-f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a.scope: Deactivated successfully.
Dec 2 03:00:12 localhost systemd[1]: libpod-f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a.scope: Consumed 2.393s CPU time.
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53429]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Dec 2 03:00:12 localhost ovs-vsctl[53442]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Dec 2 03:00:12 localhost puppet-user[53086]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Dec 2 03:00:12 localhost puppet-user[52063]: Notice:
/Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Dec 2 03:00:12 localhost podman[53431]: 2025-12-02 08:00:12.50721844 +0000 UTC m=+0.043563851 container died f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, 
build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-rsyslog, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat 
OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Dec 2 03:00:12 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Dec 2 03:00:12 localhost systemd[1]: tmp-crun.IakQEG.mount: Deactivated successfully. Dec 2 03:00:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:12 localhost systemd[1]: var-lib-containers-storage-overlay-5908dabcdc4beecd14375872c1a5b4a4e28c3db557b9e42f64a01ed422f93ce2-merged.mount: Deactivated successfully. Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Dec 2 03:00:12 localhost puppet-user[53086]: Notice: Applied catalog in 0.51 seconds Dec 2 03:00:12 localhost puppet-user[53086]: Application: Dec 2 03:00:12 localhost puppet-user[53086]: Initial environment: production Dec 2 03:00:12 localhost puppet-user[53086]: Converged environment: production Dec 2 03:00:12 localhost puppet-user[53086]: Run mode: user Dec 2 03:00:12 localhost puppet-user[53086]: Changes: Dec 2 03:00:12 localhost puppet-user[53086]: Total: 14 Dec 2 03:00:12 localhost puppet-user[53086]: Events: Dec 2 03:00:12 localhost puppet-user[53086]: Success: 14 Dec 2 03:00:12 localhost puppet-user[53086]: Total: 14 Dec 2 03:00:12 localhost puppet-user[53086]: Resources: 
Dec 2 03:00:12 localhost puppet-user[53086]: Skipped: 12 Dec 2 03:00:12 localhost puppet-user[53086]: Changed: 14 Dec 2 03:00:12 localhost puppet-user[53086]: Out of sync: 14 Dec 2 03:00:12 localhost puppet-user[53086]: Total: 29 Dec 2 03:00:12 localhost puppet-user[53086]: Time: Dec 2 03:00:12 localhost puppet-user[53086]: Exec: 0.02 Dec 2 03:00:12 localhost puppet-user[53086]: Config retrieval: 0.29 Dec 2 03:00:12 localhost puppet-user[53086]: Vs config: 0.40 Dec 2 03:00:12 localhost puppet-user[53086]: Transaction evaluation: 0.44 Dec 2 03:00:12 localhost puppet-user[53086]: Catalog application: 0.51 Dec 2 03:00:12 localhost puppet-user[53086]: Last run: 1764662412 Dec 2 03:00:12 localhost puppet-user[53086]: Total: 0.51 Dec 2 03:00:12 localhost puppet-user[53086]: Version: Dec 2 03:00:12 localhost puppet-user[53086]: Config: 1764662411 Dec 2 03:00:12 localhost puppet-user[53086]: Puppet: 7.10.0 Dec 2 03:00:12 localhost podman[53431]: 2025-12-02 08:00:12.597734749 +0000 UTC m=+0.134080150 container cleanup f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=container-puppet-rsyslog, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: 
Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Dec 2 03:00:12 localhost puppet-user[53145]: Notice: Applied catalog in 0.44 seconds Dec 2 03:00:12 localhost puppet-user[53145]: Application: Dec 2 03:00:12 localhost puppet-user[53145]: Initial environment: production Dec 2 03:00:12 localhost puppet-user[53145]: Converged environment: production Dec 2 03:00:12 localhost puppet-user[53145]: Run mode: user Dec 2 03:00:12 localhost puppet-user[53145]: Changes: Dec 2 03:00:12 localhost puppet-user[53145]: Total: 31 Dec 2 03:00:12 localhost puppet-user[53145]: Events: Dec 2 03:00:12 localhost puppet-user[53145]: Success: 31 Dec 2 03:00:12 localhost puppet-user[53145]: Total: 31 Dec 2 03:00:12 localhost puppet-user[53145]: Resources: Dec 2 03:00:12 localhost puppet-user[53145]: 
Skipped: 22 Dec 2 03:00:12 localhost puppet-user[53145]: Changed: 31 Dec 2 03:00:12 localhost puppet-user[53145]: Out of sync: 31 Dec 2 03:00:12 localhost puppet-user[53145]: Total: 151 Dec 2 03:00:12 localhost puppet-user[53145]: Time: Dec 2 03:00:12 localhost puppet-user[53145]: Package: 0.03 Dec 2 03:00:12 localhost puppet-user[53145]: Ceilometer config: 0.34 Dec 2 03:00:12 localhost puppet-user[53145]: Transaction evaluation: 0.43 Dec 2 03:00:12 localhost puppet-user[53145]: Catalog application: 0.44 Dec 2 03:00:12 localhost puppet-user[53145]: Config retrieval: 0.44 Dec 2 03:00:12 localhost puppet-user[53145]: Last run: 1764662412 Dec 2 03:00:12 localhost puppet-user[53145]: Resources: 0.00 Dec 2 03:00:12 localhost puppet-user[53145]: Total: 0.44 Dec 2 03:00:12 localhost puppet-user[53145]: Version: Dec 2 03:00:12 localhost puppet-user[53145]: Config: 1764662411 Dec 2 03:00:12 localhost puppet-user[53145]: Puppet: 7.10.0 Dec 2 03:00:12 localhost systemd[1]: libpod-09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d.scope: Deactivated successfully. Dec 2 03:00:12 localhost systemd[1]: libpod-09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d.scope: Consumed 2.790s CPU time. Dec 2 03:00:13 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Dec 2 03:00:13 localhost systemd[1]: libpod-d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257.scope: Deactivated successfully. Dec 2 03:00:13 localhost systemd[1]: libpod-d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257.scope: Consumed 2.949s CPU time. 
Dec 2 03:00:13 localhost podman[53031]: 2025-12-02 08:00:13.195226193 +0000 UTC m=+3.319793187 container died d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, container_name=container-puppet-ceilometer, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-central-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-central, architecture=x86_64, build-date=2025-11-19T00:11:59Z, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:00:13 localhost python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 
'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume 
/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 2 03:00:13 localhost systemd[1]: libpod-conmon-f8196f31e8f2465df3ff647e63a0beabeba6e7bb16edc123f88deea6cfc5636a.scope: Deactivated successfully. Dec 2 03:00:13 localhost podman[52972]: 2025-12-02 08:00:13.251166877 +0000 UTC m=+3.764159611 container died 09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, container_name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay-86f9ede822be11b60c0a1703a4ec9607dd292d56847ed8465c37bae8fb9e0d08-merged.mount: Deactivated successfully. Dec 2 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d-userdata-shm.mount: Deactivated successfully. 
Dec 2 03:00:13 localhost podman[53501]: 2025-12-02 08:00:13.697064771 +0000 UTC m=+0.737871489 container cleanup 09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, container_name=container-puppet-ovn_controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Dec 2 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay-5e7aea19432089756ed62f0f30cfa5a3f11dba2345bf487cdfbd5c2a4914be89-merged.mount: Deactivated successfully. Dec 2 03:00:13 localhost systemd[1]: libpod-conmon-09803f8cf8cd21b86529e70478133611d0fc830d7e92da00d23b8653587bd24d.scope: Deactivated successfully. 
Dec 2 03:00:13 localhost podman[53117]: 2025-12-02 08:00:10.100671404 +0000 UTC m=+0.041426046 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 2 03:00:13 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Dec 2 03:00:13 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Dec 2 03:00:13 localhost podman[53539]: 2025-12-02 08:00:13.762124704 +0000 UTC m=+0.552097065 container cleanup d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vcs-type=git, build-date=2025-11-19T00:11:59Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=container-puppet-ceilometer, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-ceilometer-central-container) Dec 2 03:00:13 localhost systemd[1]: libpod-conmon-d3d5f6f441933c37b0dad78090286e10a59e8be704d0fdf20e3b61324b5b9257.scope: Deactivated successfully. 
Dec 2 03:00:13 localhost python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 2 03:00:13 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Dec 2 03:00:13 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Dec 2 03:00:13 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Dec 2 03:00:13 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Dec 2 03:00:13 localhost podman[53615]: 2025-12-02 08:00:13.983010997 +0000 UTC m=+0.103928302 
container create 53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-11-19T00:23:27Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-server, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=container-puppet-neutron, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-server-container, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 2 03:00:14 localhost systemd[1]: Started libpod-conmon-53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db.scope. Dec 2 03:00:14 localhost podman[53615]: 2025-12-02 08:00:13.931046746 +0000 UTC m=+0.051964071 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 2 03:00:14 localhost systemd[1]: Started libcrun container. 
Dec 2 03:00:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73f5af374b13f82b9f4d3d5847d5882ab5c5f129a64a44d0b3384933c5aad231/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:14 localhost podman[53615]: 2025-12-02 08:00:14.051830119 +0000 UTC m=+0.172747444 container init 53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, container_name=container-puppet-neutron, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:23:27Z, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-server-container) Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Dec 2 03:00:14 localhost podman[53615]: 2025-12-02 08:00:14.064417579 +0000 UTC m=+0.185334884 container start 53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:23:27Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 
neutron-server, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, container_name=container-puppet-neutron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-server, com.redhat.component=openstack-neutron-server-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:00:14 localhost podman[53615]: 2025-12-02 08:00:14.064811079 +0000 UTC m=+0.185728444 container attach 53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:23:27Z, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, io.openshift.expose-services=, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-server-container, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, vcs-type=git, container_name=container-puppet-neutron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Dec 2 03:00:14 localhost 
python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: 
/Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Dec 2 03:00:14 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Dec 2 03:00:15 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Dec 2 03:00:15 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Dec 2 03:00:15 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Dec 2 03:00:15 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Dec 2 03:00:15 localhost puppet-user[52063]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Dec 2 03:00:15 localhost puppet-user[52063]: Notice: 
/Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3a12438802493a75725c4f7704f2af6db1ef72af396369e5de28f6f4d6a7ed98' Dec 2 03:00:15 localhost puppet-user[52063]: Notice: Applied catalog in 4.93 seconds Dec 2 03:00:15 localhost puppet-user[52063]: Application: Dec 2 03:00:15 localhost puppet-user[52063]: Initial environment: production Dec 2 03:00:15 localhost puppet-user[52063]: Converged environment: production Dec 2 03:00:15 localhost puppet-user[52063]: Run mode: user Dec 2 03:00:15 localhost puppet-user[52063]: Changes: Dec 2 03:00:15 localhost puppet-user[52063]: Total: 183 Dec 2 03:00:15 localhost puppet-user[52063]: Events: Dec 2 03:00:15 localhost puppet-user[52063]: Success: 183 Dec 2 03:00:15 localhost puppet-user[52063]: Total: 183 Dec 2 03:00:15 localhost puppet-user[52063]: Resources: Dec 2 03:00:15 localhost puppet-user[52063]: Changed: 183 Dec 2 03:00:15 localhost puppet-user[52063]: Out of sync: 183 Dec 2 03:00:15 localhost puppet-user[52063]: Skipped: 57 Dec 2 03:00:15 localhost puppet-user[52063]: Total: 487 Dec 2 03:00:15 localhost puppet-user[52063]: Time: Dec 2 03:00:15 localhost puppet-user[52063]: Concat file: 0.00 Dec 2 03:00:15 localhost puppet-user[52063]: Concat fragment: 0.00 Dec 2 03:00:15 localhost puppet-user[52063]: Anchor: 0.00 Dec 2 03:00:15 localhost puppet-user[52063]: File line: 0.00 Dec 2 03:00:15 localhost puppet-user[52063]: Virtlogd config: 0.00 Dec 2 03:00:15 localhost puppet-user[52063]: Virtqemud config: 0.01 Dec 2 03:00:15 localhost puppet-user[52063]: Virtsecretd config: 0.01 Dec 2 03:00:15 localhost puppet-user[52063]: Virtnodedevd config: 0.01 Dec 2 03:00:15 localhost puppet-user[52063]: Virtstoraged config: 0.02 Dec 2 03:00:15 localhost puppet-user[52063]: Exec: 0.02 Dec 2 03:00:15 localhost puppet-user[52063]: Package: 0.02 Dec 2 03:00:15 localhost puppet-user[52063]: Virtproxyd config: 0.03 Dec 2 03:00:15 localhost puppet-user[52063]: File: 
0.03 Dec 2 03:00:15 localhost puppet-user[52063]: Augeas: 1.22 Dec 2 03:00:15 localhost puppet-user[52063]: Config retrieval: 1.65 Dec 2 03:00:15 localhost puppet-user[52063]: Last run: 1764662415 Dec 2 03:00:15 localhost puppet-user[52063]: Nova config: 3.33 Dec 2 03:00:15 localhost puppet-user[52063]: Transaction evaluation: 4.92 Dec 2 03:00:15 localhost puppet-user[52063]: Catalog application: 4.93 Dec 2 03:00:15 localhost puppet-user[52063]: Resources: 0.00 Dec 2 03:00:15 localhost puppet-user[52063]: Total: 4.94 Dec 2 03:00:15 localhost puppet-user[52063]: Version: Dec 2 03:00:15 localhost puppet-user[52063]: Config: 1764662408 Dec 2 03:00:15 localhost puppet-user[52063]: Puppet: 7.10.0 Dec 2 03:00:15 localhost puppet-user[53660]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Dec 2 03:00:15 localhost puppet-user[53660]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:00:15 localhost puppet-user[53660]: (file: /etc/puppet/hiera.yaml) Dec 2 03:00:15 localhost puppet-user[53660]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:00:15 localhost puppet-user[53660]: (file & line not available) Dec 2 03:00:15 localhost puppet-user[53660]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:00:15 localhost puppet-user[53660]: (file & line not available) Dec 2 03:00:15 localhost systemd[1]: libpod-79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d.scope: Deactivated successfully. Dec 2 03:00:15 localhost systemd[1]: libpod-79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d.scope: Consumed 9.035s CPU time. 
Dec 2 03:00:15 localhost podman[51914]: 2025-12-02 08:00:15.984103998 +0000 UTC m=+10.990518515 container died 79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, release=1761123044, container_name=container-puppet-nova_libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably 
treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:00:15 localhost puppet-user[53660]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Dec 2 03:00:16 localhost systemd[1]: tmp-crun.qCzK4e.mount: Deactivated successfully. Dec 2 03:00:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d-userdata-shm.mount: Deactivated successfully. 
Dec 2 03:00:16 localhost systemd[1]: var-lib-containers-storage-overlay-104925f4f3140d86c4d76991cbbe20b0ea2114e629deebdf08f0de90504ded5f-merged.mount: Deactivated successfully. Dec 2 03:00:16 localhost podman[53796]: 2025-12-02 08:00:16.152757484 +0000 UTC m=+0.158639715 container cleanup 79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, container_name=container-puppet-nova_libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:00:16 localhost systemd[1]: libpod-conmon-79e7bfa253c8a0aa3056c562963a2fcae1d6b8bb4029380f0fb9891fb44c522d.scope: Deactivated successfully. 
Dec 2 03:00:16 localhost python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt 
profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw 
--volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:00:16 localhost puppet-user[53660]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.64 seconds Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: 
/Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: 
/Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Dec 2 03:00:16 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Dec 2 03:00:17 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created Dec 2 03:00:17 localhost puppet-user[53660]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created Dec 2 03:00:17 localhost puppet-user[53660]: Notice: Applied catalog in 0.45 seconds Dec 2 03:00:17 localhost puppet-user[53660]: Application: Dec 2 03:00:17 localhost puppet-user[53660]: Initial environment: production Dec 2 03:00:17 localhost puppet-user[53660]: Converged environment: production 
Dec 2 03:00:17 localhost puppet-user[53660]: Run mode: user Dec 2 03:00:17 localhost puppet-user[53660]: Changes: Dec 2 03:00:17 localhost puppet-user[53660]: Total: 33 Dec 2 03:00:17 localhost puppet-user[53660]: Events: Dec 2 03:00:17 localhost puppet-user[53660]: Success: 33 Dec 2 03:00:17 localhost puppet-user[53660]: Total: 33 Dec 2 03:00:17 localhost puppet-user[53660]: Resources: Dec 2 03:00:17 localhost puppet-user[53660]: Skipped: 21 Dec 2 03:00:17 localhost puppet-user[53660]: Changed: 33 Dec 2 03:00:17 localhost puppet-user[53660]: Out of sync: 33 Dec 2 03:00:17 localhost puppet-user[53660]: Total: 155 Dec 2 03:00:17 localhost puppet-user[53660]: Time: Dec 2 03:00:17 localhost puppet-user[53660]: Resources: 0.00 Dec 2 03:00:17 localhost puppet-user[53660]: Ovn metadata agent config: 0.02 Dec 2 03:00:17 localhost puppet-user[53660]: Neutron config: 0.37 Dec 2 03:00:17 localhost puppet-user[53660]: Transaction evaluation: 0.44 Dec 2 03:00:17 localhost puppet-user[53660]: Catalog application: 0.45 Dec 2 03:00:17 localhost puppet-user[53660]: Config retrieval: 0.71 Dec 2 03:00:17 localhost puppet-user[53660]: Last run: 1764662417 Dec 2 03:00:17 localhost puppet-user[53660]: Total: 0.45 Dec 2 03:00:17 localhost puppet-user[53660]: Version: Dec 2 03:00:17 localhost puppet-user[53660]: Config: 1764662415 Dec 2 03:00:17 localhost puppet-user[53660]: Puppet: 7.10.0 Dec 2 03:00:17 localhost systemd[1]: libpod-53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db.scope: Deactivated successfully. Dec 2 03:00:17 localhost systemd[1]: libpod-53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db.scope: Consumed 3.473s CPU time. 
Dec 2 03:00:17 localhost podman[53615]: 2025-12-02 08:00:17.728738424 +0000 UTC m=+3.849655759 container died 53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-server-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.12, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_puppet_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:23:27Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, architecture=x86_64) Dec 2 03:00:17 localhost systemd[1]: tmp-crun.zTntEm.mount: Deactivated successfully. Dec 2 03:00:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:17 localhost systemd[1]: var-lib-containers-storage-overlay-73f5af374b13f82b9f4d3d5847d5882ab5c5f129a64a44d0b3384933c5aad231-merged.mount: Deactivated successfully. 
Dec 2 03:00:17 localhost podman[53868]: 2025-12-02 08:00:17.863527593 +0000 UTC m=+0.126254936 container cleanup 53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:23:27Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-server-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, name=rhosp17/openstack-neutron-server, version=17.1.12, url=https://www.redhat.com) Dec 2 03:00:17 localhost systemd[1]: libpod-conmon-53e24d9c63e51791534ba5dbbc87c43b7936a07e3fdb5942196f097973e8e4db.scope: Deactivated successfully. 
Dec 2 03:00:17 localhost python3[51707]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541913 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541913', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 2 03:00:18 localhost python3[53922]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:19 localhost python3[53954]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:00:20 localhost python3[54004]: ansible-ansible.legacy.stat 
Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:00:20 localhost python3[54047]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662419.8885703-83456-143487100285342/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:20 localhost python3[54109]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:00:21 localhost python3[54152]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662420.709919-83456-152773432819666/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:21 localhost python3[54214]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:00:22 localhost python3[54257]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662421.6082842-83556-63790121439300/source 
dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:22 localhost python3[54319]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:00:23 localhost python3[54362]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662422.4676592-83585-164176831914883/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:23 localhost python3[54392]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:00:23 localhost systemd[1]: Reloading. Dec 2 03:00:23 localhost systemd-rc-local-generator[54417]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:23 localhost systemd-sysv-generator[54422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 03:00:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:23 localhost systemd[1]: Reloading. Dec 2 03:00:24 localhost systemd-sysv-generator[54458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:24 localhost systemd-rc-local-generator[54454]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:24 localhost systemd[1]: Starting TripleO Container Shutdown... Dec 2 03:00:24 localhost systemd[1]: Finished TripleO Container Shutdown. Dec 2 03:00:24 localhost python3[54517]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:00:25 localhost python3[54560]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662424.4535828-83634-128300938039233/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:25 localhost python3[54622]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True 
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:00:26 localhost python3[54665]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662425.348438-83704-89717137302983/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:26 localhost python3[54695]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:00:26 localhost systemd[1]: Reloading. Dec 2 03:00:26 localhost systemd-rc-local-generator[54722]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:26 localhost systemd-sysv-generator[54726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:26 localhost systemd[1]: Reloading. Dec 2 03:00:26 localhost systemd-sysv-generator[54764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:26 localhost systemd-rc-local-generator[54761]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:00:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:27 localhost systemd[1]: Starting Create netns directory... Dec 2 03:00:27 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:00:27 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:00:27 localhost systemd[1]: Finished Create netns directory. Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 36af2f1ef63ece3c88eb676f44e9c36d Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: 4767aaabc3de112d8791c290aa2b669d Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 230f4ebc92ecc6f511b0217abb58f1b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: 
ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 1c70cec5d3310de4d4589e1a95c8fd3c Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 72848ce4d815e5b4e89ff3e01c5f9f7e Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 72848ce4d815e5b4e89ff3e01c5f9f7e Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: d1544001d5773d0045aaf61439ef5e02 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:27 localhost python3[54788]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: ff8ff724cb5f0d02131158e2fae849b6 Dec 2 03:00:29 localhost python3[54846]: ansible-tripleo_container_manage Invoked with 
config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:00:29 localhost podman[54884]: 2025-12-02 08:00:29.275916041 +0000 UTC m=+0.057764924 container create 61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:00:29 localhost systemd[1]: Started libpod-conmon-61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d.scope. Dec 2 03:00:29 localhost systemd[1]: Started libcrun container. Dec 2 03:00:29 localhost podman[54884]: 2025-12-02 08:00:29.24494354 +0000 UTC m=+0.026792403 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/804deff8dacbfc312114476fef5e5066b58626df118d8072d88e0a05fadba7d2/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:29 localhost podman[54884]: 2025-12-02 08:00:29.356569013 +0000 UTC m=+0.138417896 container init 61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, release=1761123044, architecture=x86_64, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr_init_logs, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:00:29 localhost podman[54884]: 2025-12-02 08:00:29.370244791 +0000 UTC m=+0.152093664 container start 61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr_init_logs, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 
'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:00:29 localhost podman[54884]: 2025-12-02 08:00:29.370964979 +0000 UTC m=+0.152813842 container attach 61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:46Z, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr_init_logs) Dec 2 03:00:29 localhost systemd[1]: libpod-61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d.scope: Deactivated successfully. Dec 2 03:00:29 localhost podman[54884]: 2025-12-02 08:00:29.375347685 +0000 UTC m=+0.157196538 container died 61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, tcib_managed=true, container_name=metrics_qdr_init_logs, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
name=rhosp17/openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:00:29 localhost podman[54903]: 2025-12-02 08:00:29.449040044 +0000 UTC m=+0.060193198 container cleanup 61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, container_name=metrics_qdr_init_logs, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible) Dec 2 03:00:29 localhost systemd[1]: 
libpod-conmon-61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d.scope: Deactivated successfully. Dec 2 03:00:29 localhost python3[54846]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Dec 2 03:00:29 localhost podman[54976]: 2025-12-02 08:00:29.916563764 +0000 UTC m=+0.092548444 container create 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12) Dec 2 03:00:29 localhost systemd[1]: Started libpod-conmon-71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.scope. 
Dec 2 03:00:29 localhost podman[54976]: 2025-12-02 08:00:29.864549262 +0000 UTC m=+0.040533942 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:29 localhost systemd[1]: Started libcrun container. Dec 2 03:00:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083325a356d009687825873f5ef80d42d8ec3a9c9ef25c5a97dbce5b8f99fa32/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083325a356d009687825873f5ef80d42d8ec3a9c9ef25c5a97dbce5b8f99fa32/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 2 03:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:00:30 localhost podman[54976]: 2025-12-02 08:00:30.018578924 +0000 UTC m=+0.194563604 container init 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4) Dec 2 03:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:00:30 localhost podman[54976]: 2025-12-02 08:00:30.06348667 +0000 UTC m=+0.239471340 container start 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step1, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr) Dec 2 03:00:30 localhost python3[54846]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=36af2f1ef63ece3c88eb676f44e9c36d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 2 03:00:30 localhost podman[54999]: 2025-12-02 08:00:30.165562983 +0000 UTC m=+0.092806282 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true) Dec 2 03:00:30 localhost systemd[1]: 
var-lib-containers-storage-overlay-804deff8dacbfc312114476fef5e5066b58626df118d8072d88e0a05fadba7d2-merged.mount: Deactivated successfully. Dec 2 03:00:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61bfb0f60bf1d340e284e5b4c8d783ad999ed782a1edb4c5af3cfb351fa36a5d-userdata-shm.mount: Deactivated successfully. Dec 2 03:00:30 localhost podman[54999]: 2025-12-02 08:00:30.430509119 +0000 UTC m=+0.357752438 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:00:30 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:00:30 localhost python3[55074]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:31 localhost python3[55090]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:00:31 localhost python3[55151]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662431.1264088-83886-30778362098085/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:32 localhost python3[55167]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 03:00:32 localhost systemd[1]: Reloading. Dec 2 03:00:32 localhost systemd-rc-local-generator[55193]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:32 localhost systemd-sysv-generator[55196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 2 03:00:32 localhost python3[55220]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:00:33 localhost systemd[1]: Reloading. Dec 2 03:00:33 localhost systemd-sysv-generator[55253]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:00:33 localhost systemd-rc-local-generator[55250]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:00:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:00:33 localhost systemd[1]: Starting metrics_qdr container... Dec 2 03:00:33 localhost systemd[1]: Started metrics_qdr container. 
Dec 2 03:00:33 localhost python3[55300]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:35 localhost python3[55421]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005541913 step=1 update_config_hash_only=False Dec 2 03:00:35 localhost python3[55437]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:00:36 localhost python3[55453]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:01:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:01:01 localhost systemd[1]: tmp-crun.P3mFVZ.mount: Deactivated successfully. 
Dec 2 03:01:01 localhost podman[55465]: 2025-12-02 08:01:01.453013061 +0000 UTC m=+0.090758447 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:01:01 localhost podman[55465]: 2025-12-02 08:01:01.64854382 +0000 UTC m=+0.286289186 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1) Dec 2 03:01:01 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:01:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:01:32 localhost systemd[1]: tmp-crun.uRIXiN.mount: Deactivated successfully. 
Dec 2 03:01:32 localhost podman[55570]: 2025-12-02 08:01:32.430404454 +0000 UTC m=+0.074521973 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, architecture=x86_64, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, managed_by=tripleo_ansible) Dec 2 03:01:32 localhost podman[55570]: 2025-12-02 08:01:32.657507598 +0000 UTC m=+0.301625147 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=) Dec 2 03:01:32 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:02:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:02:03 localhost systemd[1]: tmp-crun.lNcr2b.mount: Deactivated successfully. 
Dec 2 03:02:03 localhost podman[55600]: 2025-12-02 08:02:03.448484336 +0000 UTC m=+0.088822397 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, release=1761123044) Dec 2 03:02:03 localhost podman[55600]: 2025-12-02 08:02:03.629221796 +0000 UTC m=+0.269559787 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:02:03 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:02:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:02:34 localhost systemd[1]: tmp-crun.FRQuhw.mount: Deactivated successfully. 
Dec 2 03:02:34 localhost podman[55708]: 2025-12-02 08:02:34.43349622 +0000 UTC m=+0.077228116 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:02:34 localhost podman[55708]: 2025-12-02 08:02:34.653193387 +0000 UTC m=+0.296925263 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Dec 2 03:02:34 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:03:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:03:05 localhost podman[55736]: 2025-12-02 08:03:05.427082421 +0000 UTC m=+0.069575535 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, vcs-type=git) Dec 2 03:03:05 localhost podman[55736]: 2025-12-02 08:03:05.58792869 +0000 UTC m=+0.230421764 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z) Dec 2 03:03:05 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:03:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:03:36 localhost systemd[1]: tmp-crun.v5C9YS.mount: Deactivated successfully. 
Dec 2 03:03:36 localhost podman[55842]: 2025-12-02 08:03:36.42171774 +0000 UTC m=+0.063716654 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 2 03:03:36 localhost podman[55842]: 2025-12-02 08:03:36.612663191 +0000 UTC m=+0.254662145 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:49:46Z, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=) Dec 2 03:03:36 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:04:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:04:07 localhost systemd[1]: tmp-crun.fid941.mount: Deactivated successfully. 
Dec 2 03:04:07 localhost podman[55870]: 2025-12-02 08:04:07.454684154 +0000 UTC m=+0.093148411 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
config_id=tripleo_step1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:04:07 localhost podman[55870]: 2025-12-02 08:04:07.676795375 +0000 UTC m=+0.315259562 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:04:07 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:04:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:04:38 localhost systemd[1]: tmp-crun.ucPcpL.mount: Deactivated successfully. 
Dec 2 03:04:38 localhost podman[55977]: 2025-12-02 08:04:38.450530849 +0000 UTC m=+0.091501677 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:04:38 localhost podman[55977]: 2025-12-02 08:04:38.664046879 +0000 UTC m=+0.305017677 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, architecture=x86_64, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd) Dec 2 03:04:38 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:05:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:05:09 localhost systemd[1]: tmp-crun.mgmfYw.mount: Deactivated successfully. 
Dec 2 03:05:09 localhost podman[56006]: 2025-12-02 08:05:09.436674314 +0000 UTC m=+0.074865520 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc.) Dec 2 03:05:09 localhost podman[56006]: 2025-12-02 08:05:09.637589916 +0000 UTC m=+0.275781102 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 2 03:05:09 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:05:28 localhost sshd[56112]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 03:05:28 localhost ceph-osd[32582]: osd.3 pg_epoch: 21 pg[2.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [4,5,3] r=2 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:30 localhost ceph-osd[31622]: osd.0 pg_epoch: 23 pg[3.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [5,4,0] r=2 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:31 localhost ceph-osd[32582]: osd.3 pg_epoch: 25 pg[4.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [3,4,5] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:32 localhost ceph-osd[32582]: osd.3 pg_epoch: 26 pg[4.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [3,4,5] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:35 localhost ceph-osd[32582]: osd.3 pg_epoch: 27 pg[5.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [2,3,4] r=1 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.
Dec 2 03:05:40 localhost systemd[1]: tmp-crun.PtyjfM.mount: Deactivated successfully.
Dec 2 03:05:40 localhost podman[56115]: 2025-12-02 08:05:40.431229503 +0000 UTC m=+0.074759768 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1) Dec 2 03:05:40 localhost podman[56115]: 2025-12-02 08:05:40.646552332 +0000 UTC m=+0.290082647 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, 
url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64) Dec 2 03:05:40 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:05:41 localhost ceph-osd[32582]: osd.3 pg_epoch: 33 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=11.261794090s) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active pruub 1117.898559570s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:41 localhost ceph-osd[32582]: osd.3 pg_epoch: 33 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=11.258399010s) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1117.898559570s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:41 localhost ceph-osd[31622]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=13.721211433s) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active pruub 1124.780761719s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,0], acting [5,4,0] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:41 localhost ceph-osd[31622]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=13.718441963s) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1124.780761719s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.18( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.15( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.16( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.13( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.12( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.14( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.11( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.1f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.1e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.1c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.1d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.1b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.1a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.8( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.9( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.5( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.4( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.2( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.1( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.19( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.7( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.6( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.3( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 
pg[2.c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.10( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[32582]: osd.3 pg_epoch: 34 pg[2.17( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=2 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.1b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.1a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.18( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.19( empty 
local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.16( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.13( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.15( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.14( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.17( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.12( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.11( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.f( empty 
local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.1c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.1( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.2( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.5( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.4( empty local-lis/les=23/24 
n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.3( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.6( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.7( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.8( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.9( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.1d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.b( empty local-lis/les=23/24 n=0 ec=33/23 
lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.1e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.1f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:42 localhost ceph-osd[31622]: osd.0 pg_epoch: 34 pg[3.10( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:43 localhost ceph-osd[32582]: osd.3 pg_epoch: 35 pg[4.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35 pruub=13.097516060s) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active pruub 1121.829223633s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,5], acting [3,4,5] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:43 localhost ceph-osd[32582]: osd.3 pg_epoch: 35 pg[4.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35 pruub=13.097516060s) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown pruub 1121.829223633s@ mbc={}] state: transitioning to Primary Dec 2 03:05:43 localhost ceph-osd[32582]: osd.3 pg_epoch: 35 pg[5.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=15.465136528s) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 1124.201416016s@ mbc={}] start_peering_interval up [2,3,4] -> [2,3,4], acting [2,3,4] -> [2,3,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 1, 
features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:43 localhost ceph-osd[32582]: osd.3 pg_epoch: 35 pg[5.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=15.461248398s) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1124.201416016s@ mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.18( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1b( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.18( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1a( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1d( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1c( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.1d( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.e( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.f( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.2( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.3( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.5( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.7( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.6( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.5( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.c( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.b( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.9( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.a( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.8( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.1f( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.1e( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.19( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary 
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.1b( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.4( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.1a( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1f( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1e( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.10( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.11( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.2( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost 
ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.11( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.10( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.12( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.3( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.13( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.12( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.13( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.14( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 
pg_epoch: 36 pg[5.1( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.f( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.15( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.e( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.17( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.d( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.19( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[5.16( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 
pg[5.1c( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.15( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.14( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.17( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.9( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.16( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.8( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.d( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.a( empty local-lis/les=25/26 n=0 ec=35/25 
lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.b( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.c( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.6( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.7( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.4( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.0( empty local-lis/les=35/36 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) 
[3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:44 localhost 
ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:44 localhost ceph-osd[32582]: osd.3 pg_epoch: 36 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=0 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 2 03:05:46 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.0 deep-scrub starts
Dec 2 03:05:46 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.0 deep-scrub ok
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0,4,2] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.654157639s) [0,1,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085693359s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,5], acting [5,4,0] -> [0,1,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.654157639s) [0,1,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.085693359s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.654387474s) [5,4,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085937500s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.654333115s) [5,4,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085937500s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.654165268s) [3,2,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085815430s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,4], acting [5,4,0] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.654087067s) [3,2,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085815430s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653826714s) [3,4,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085571289s@ mbc={}] start_peering_interval up [5,4,0] -> [3,4,5], acting [5,4,0] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653568268s) [2,0,4] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085571289s@ mbc={}] start_peering_interval up [5,4,0] -> [2,0,4], acting [5,4,0] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653591156s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085571289s@ mbc={}] start_peering_interval up [5,4,0] -> [5,1,3], acting [5,4,0] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653493881s) [3,5,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085571289s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,4], acting [5,4,0] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653387070s) [2,0,4] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085571289s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653527260s) [3,4,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085571289s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653422356s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085571289s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653345108s) [3,5,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085571289s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653051376s) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085327148s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.653051376s) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.085327148s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652811050s) [4,0,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085205078s@ mbc={}] start_peering_interval up [5,4,0] -> [4,0,5], acting [5,4,0] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652792931s) [4,0,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085205078s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652827263s) [4,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085327148s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652703285s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085205078s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652594566s) [3,5,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085083008s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,1], acting [5,4,0] -> [3,5,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652567863s) [3,5,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085083008s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652786255s) [4,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085327148s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652657509s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085205078s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652359009s) [4,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085083008s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652342796s) [4,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085083008s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652385712s) [0,4,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085083008s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,2], acting [5,4,0] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652385712s) [0,4,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.085083008s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651956558s) [1,2,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084960938s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,3], acting [5,4,0] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651874542s) [1,5,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084838867s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,0], acting [5,4,0] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651953697s) [2,4,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084960938s@ mbc={}] start_peering_interval up [5,4,0] -> [2,4,0], acting [5,4,0] -> [2,4,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651887894s) [1,2,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084960938s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651806831s) [1,5,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084838867s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652468681s) [4,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085571289s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651900291s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084960938s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651865005s) [2,4,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084960938s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652619362s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.085815430s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,3], acting [5,4,0] -> [1,5,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652565002s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085815430s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651395798s) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084716797s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651395798s) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.084716797s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651181221s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084594727s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651185036s) [1,2,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084594727s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,0], acting [5,4,0] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651831627s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084960938s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651161194s) [1,2,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084594727s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651131630s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084594727s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.650981903s) [1,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084594727s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,5], acting [5,4,0] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651112556s) [2,1,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084716797s@ mbc={}] start_peering_interval up [5,4,0] -> [2,1,0], acting [5,4,0] -> [2,1,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.650949478s) [1,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084594727s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651082993s) [2,1,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084716797s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651031494s) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084716797s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651031494s) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.084716797s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.650726318s) [0,1,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084350586s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,2], acting [5,4,0] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651062012s) [5,3,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084838867s@ mbc={}] start_peering_interval up [5,4,0] -> [5,3,4], acting [5,4,0] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.650726318s) [0,1,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.084350586s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.651031494s) [5,3,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084838867s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.650574684s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084350586s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.650166512s) [5,4,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1128.084106445s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.650369644s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084350586s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.650120735s) [5,4,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.084106445s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.652303696s) [4,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.085571289s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.a( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,4,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774377823s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.836425781s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,5], acting [2,3,4] -> [0,1,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768483162s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.830810547s@ mbc={}] start_peering_interval up [2,3,4] -> [1,0,2], acting [2,3,4] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768430710s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.830810547s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766879082s) [3,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.829223633s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,5], acting [2,3,4] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766879082s) [3,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.829223633s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774645805s) [2,0,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.837158203s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,4], acting [2,3,4] -> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774545670s) [2,0,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.837158203s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.4( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774307251s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.836425781s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772085190s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.835449219s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772748947s) [2,4,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.834838867s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,3], acting [2,3,4] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765979767s) [3,1,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.829467773s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,2], acting [2,3,4] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771361351s) [2,4,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.834838867s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765979767s) [3,1,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.829467773s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765991211s) [0,4,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.829589844s@ mbc={}] start_peering_interval up [2,3,4] -> [0,4,5], acting [2,3,4] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765943527s) [0,4,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.829589844s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,5,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772041321s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.835449219s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.2( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,5,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771343231s) [2,4,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.836425781s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,0], acting [2,3,4] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771301270s) [2,4,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.836425781s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.b( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769821167s) [0,2,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.835327148s@ mbc={}] start_peering_interval up [2,3,4] -> [0,2,4], acting [2,3,4] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765130997s) [3,4,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.830810547s@ mbc={}] start_peering_interval up [2,3,4] -> [3,4,2], acting [2,3,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769781113s) [0,2,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.835327148s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768354416s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.833984375s@ mbc={}] start_peering_interval up [2,3,4] -> [1,5,0], acting [2,3,4] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765130997s) [3,4,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.830810547s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,4,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769432068s) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 
1125.836181641s@ mbc={}] start_peering_interval up [2,3,4] -> [3,2,4], acting [2,3,4] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769432068s) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.836181641s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768296242s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.833984375s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769285202s) [3,5,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.836303711s@ mbc={}] start_peering_interval up [2,3,4] -> [3,5,4], acting [2,3,4] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769285202s) [3,5,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.836303711s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766729355s) [2,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.833862305s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,1], acting [2,3,4] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost 
ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766638756s) [2,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.833862305s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768626213s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.835815430s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768581390s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.835815430s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769913673s) [1,3,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.837280273s@ mbc={}] start_peering_interval up [2,3,4] -> [1,3,2], acting [2,3,4] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769865036s) [1,3,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.837280273s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768770218s) [1,2,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.836303711s@ mbc={}] 
start_peering_interval up [2,3,4] -> [1,2,0], acting [2,3,4] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768734932s) [1,2,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.836303711s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.779154778s) [2,4,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847656250s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.779109955s) [2,4,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847656250s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766963005s) [4,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.835815430s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,0], acting [2,3,4] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1f( empty 
local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766510963s) [4,5,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.835449219s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,3], acting [2,3,4] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766464233s) [4,5,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.835449219s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.778610229s) [0,4,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847656250s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636725426s) [1,5,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705932617s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636270523s) [5,1,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705688477s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766930580s) [4,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.835815430s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636182785s) [5,1,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705688477s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.778536797s) [0,4,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847656250s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.778059006s) [3,4,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847778320s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,2], acting [3,4,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.778059006s) [3,4,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.847778320s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.777619362s) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847656250s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features 
acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.777619362s) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.847656250s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.1e( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,4,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.635581970s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705688477s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.635509491s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705688477s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764682770s) [5,1,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.835083008s@ mbc={}] start_peering_interval up [2,3,4] -> [5,1,3], acting [2,3,4] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764632225s) [5,1,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY 
pruub 1125.835083008s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.777222633s) [4,2,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847778320s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636683464s) [1,5,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705932617s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.777153015s) [4,2,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847778320s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634814262s) [3,4,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705566406s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634751320s) [4,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705566406s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,0], acting [4,5,3] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 
localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634723663s) [4,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705566406s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776627541s) [0,5,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847534180s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776589394s) [0,5,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847534180s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.635031700s) [5,0,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705444336s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634213448s) [2,4,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705444336s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,3], acting [4,5,3] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.12( empty 
local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,5,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634177208s) [2,4,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705444336s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775684357s) [5,3,1] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847167969s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,1], acting [3,4,5] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765464783s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.836303711s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,1], acting [2,3,4] -> [5,0,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634814262s) [3,4,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1123.705566406s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764733315s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.836303711s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.15( 
empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764796257s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.836303711s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764726639s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.836303711s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633594513s) [5,3,1] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705322266s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633567810s) [5,3,1] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705322266s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634274483s) [5,0,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705444336s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775888443s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847656250s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting 
[3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633378983s) [4,3,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705200195s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,2], acting [4,5,3] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775915146s) [3,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847778320s@ mbc={}] start_peering_interval up [3,4,5] -> [3,1,5], acting [3,4,5] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633273125s) [4,3,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705200195s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775915146s) [3,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.847778320s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633278847s) [2,0,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705322266s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,4], acting [4,5,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features 
acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633233070s) [2,0,4] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705322266s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775649071s) [5,3,1] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847167969s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775273323s) [0,4,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847534180s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775230408s) [0,4,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847534180s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775558472s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847778320s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775522232s) [5,0,1] 
r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847778320s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632546425s) [3,2,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704956055s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775401115s) [5,4,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847778320s@ mbc={}] start_peering_interval up [3,4,5] -> [5,4,3], acting [3,4,5] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775245667s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847656250s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632546425s) [3,2,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1123.704956055s@ mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775337219s) [5,4,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847778320s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 
les/c/f=34/34/0 sis=37 pruub=9.631945610s) [2,4,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705200195s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.16( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,4,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631910324s) [2,4,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705200195s@ mbc={}] state: transitioning to Stray Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631632805s) [5,1,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705078125s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,5,1] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,5,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:05:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.b( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: 
transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631601334s) [5,1,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705078125s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.759953499s) [5,0,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.833496094s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,4], acting [2,3,4] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.759929657s) [5,0,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.833496094s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773355484s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.847045898s@ mbc={}] start_peering_interval up [3,4,5] -> [0,1,5], acting [3,4,5] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773309708s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.847045898s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630913734s) [5,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704711914s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,0], acting [4,5,3] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630885124s) [5,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.704711914s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772572517s) [4,2,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.846557617s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630796432s) [2,3,1] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704711914s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630754471s) [2,3,1] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.704711914s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772478104s) [4,2,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.846557617s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772571564s) [4,3,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.846801758s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,2], acting [3,4,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772540092s) [4,3,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.846801758s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631366730s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705200195s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760310173s) [5,3,4] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.834594727s@ mbc={}] start_peering_interval up [2,3,4] -> [5,3,4], acting [2,3,4] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772434235s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.846801758s@ mbc={}] start_peering_interval up [3,4,5] -> [1,0,2], acting [3,4,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760278702s) [5,3,4] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.834594727s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772386551s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.846801758s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630822182s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705200195s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766405106s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840820312s@ mbc={}] start_peering_interval up [3,4,5] -> [1,5,0], acting [3,4,5] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.755390167s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.829833984s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630205154s) [4,3,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704589844s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.755354881s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.829833984s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772210121s) [5,3,4] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.846801758s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,4], acting [3,4,5] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630124092s) [4,3,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.704589844s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772170067s) [5,3,4] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.846801758s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761236191s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.835815430s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761205673s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.835815430s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629482269s) [3,2,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704223633s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630090714s) [4,2,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704956055s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766370773s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840820312s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629482269s) [3,2,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1123.704223633s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630060196s) [4,2,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.704956055s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771953583s) [2,1,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.846923828s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,0], acting [3,4,5] -> [2,1,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629543304s) [3,5,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704589844s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629543304s) [3,5,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1123.704589844s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771477699s) [0,5,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.846557617s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629158020s) [1,0,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704467773s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771430016s) [0,5,4] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.846557617s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629124641s) [1,0,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.704467773s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771115303s) [0,5,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.846557617s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,1], acting [3,4,5] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628822327s) [3,2,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704345703s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771069527s) [0,5,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.846557617s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628822327s) [3,2,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1123.704345703s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765193939s) [2,1,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840820312s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.759393692s) [4,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.835083008s@ mbc={}] start_peering_interval up [2,3,4] -> [4,0,2], acting [2,3,4] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.759354591s) [4,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.835083008s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628469467s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704223633s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765112877s) [2,1,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840820312s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628434181s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.704223633s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764847755s) [2,4,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840820312s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761003494s) [4,2,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.837036133s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628332138s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.704345703s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760964394s) [4,2,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.837036133s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628293037s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.704345703s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.622431755s) [3,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.698730469s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,5], acting [4,5,3] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.622431755s) [3,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1123.698730469s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764132500s) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840698242s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764026642s) [2,3,4] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840576172s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,4], acting [3,4,5] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764132500s) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.840698242s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764297485s) [2,4,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840820312s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763982773s) [2,3,4] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840576172s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763723373s) [4,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840576172s@ mbc={}] start_peering_interval up [3,4,5] -> [4,5,0], acting [3,4,5] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760432243s) [4,2,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.837280273s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,0], acting [2,3,4] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.621959686s) [1,5,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.698730469s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763629913s) [4,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840576172s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763553619s) [2,1,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840576172s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763513565s) [2,1,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840576172s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760316849s) [4,2,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.837280273s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.621560097s) [5,4,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.698852539s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,3], acting [4,5,3] -> [5,4,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769455910s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.846801758s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769421577s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.846801758s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763136864s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840698242s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628329277s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705810547s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,0], acting [4,5,3] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763062477s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840698242s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771667480s) [2,1,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.846923828s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.621520042s) [5,4,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.698852539s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628097534s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705810547s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627774239s) [2,4,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705688477s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627834320s) [3,5,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.705810547s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627725601s) [2,4,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.705688477s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627834320s) [3,5,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1123.705810547s@ mbc={}] state: transitioning to Primary
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.762476921s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840576172s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.762437820s) [4,3,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840576172s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.762310982s) [2,3,1] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.840454102s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,1], acting [3,4,5] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627837181s) [0,4,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1123.706054688s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.762266159s) [2,3,1] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.840454102s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.621759415s) [1,5,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.698730469s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627794266s) [0,4,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.706054688s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.750872612s) [4,2,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.829101562s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 2 03:05:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.749888420s) [4,2,3] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.829101562s@ mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.1b( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,5,0] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.1c( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,0,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,0,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,2,0] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.d( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.1c( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,0] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.14( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,2,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.10( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,5,0] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,4,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.10( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,4,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.13( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.8( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,0,1] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,0,1] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,1,0] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,0,1] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.1( empty
local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,1,0] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.d( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,4,0] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.e( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,0,4] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.10( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,0,4] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,4,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.1a( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,3,4] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,4,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 37 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,3] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,0] r=2 lpr=37 
pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.b( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,4] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 37 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,0,4] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[2.1e( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,5,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[2.1f( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,4,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active 
mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,1,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,5,1] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,4,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,5,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 
n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,1,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,4,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0,4,2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 
03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[2.4( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[2.6( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,4,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[2.e( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[2.19( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,4,2] r=0 
lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,4,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,2,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[2.9( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[2.1( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,5,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,5,1] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,5,4] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 
pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,4,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,5,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,4,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,4,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,5,4] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 
active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:05:52 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.6 scrub starts Dec 2 03:05:52 localhost ceph-osd[32582]: osd.3 pg_epoch: 39 pg[7.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1,5,3] r=2 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:05:52 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts Dec 2 03:05:54 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts Dec 2 03:05:54 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok Dec 2 03:05:55 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.1f scrub starts Dec 2 03:05:55 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.1f scrub ok Dec 2 03:05:58 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts Dec 2 03:05:58 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok Dec 2 03:06:00 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.c scrub starts Dec 2 03:06:00 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.3 scrub starts Dec 2 03:06:00 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.c scrub ok Dec 2 03:06:02 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.19 scrub starts Dec 2 03:06:02 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.19 scrub ok Dec 2 03:06:03 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.e scrub starts Dec 2 03:06:03 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.e scrub ok Dec 2 03:06:06 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.17 scrub starts Dec 2 03:06:06 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.17 scrub ok Dec 2 03:06:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:06:11 localhost podman[56190]: 2025-12-02 08:06:11.440954269 +0000 UTC m=+0.085390097 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Dec 2 03:06:11 localhost podman[56190]: 2025-12-02 08:06:11.629861836 +0000 UTC m=+0.274297604 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, distribution-scope=public, version=17.1.12) Dec 2 03:06:11 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:06:12 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.14 scrub starts Dec 2 03:06:12 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.14 scrub ok Dec 2 03:06:13 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.1 scrub starts Dec 2 03:06:13 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.1 scrub ok Dec 2 03:06:14 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts Dec 2 03:06:15 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.12 scrub starts Dec 2 03:06:15 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.10 scrub starts Dec 2 03:06:15 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.12 scrub ok Dec 2 03:06:18 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.19 scrub starts Dec 2 03:06:18 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.19 scrub ok Dec 2 03:06:20 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts Dec 2 03:06:20 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.4 scrub starts Dec 2 03:06:20 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok Dec 2 03:06:20 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.4 scrub ok Dec 2 03:06:21 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts Dec 2 03:06:21 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.6 scrub starts Dec 2 03:06:21 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok Dec 2 03:06:25 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.19 scrub starts Dec 2 03:06:25 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.19 scrub ok Dec 2 03:06:26 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.1e scrub starts Dec 2 03:06:26 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.1e scrub ok Dec 2 03:06:26 localhost python3[56237]: ansible-file Invoked 
with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:06:27 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts Dec 2 03:06:27 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok Dec 2 03:06:27 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts Dec 2 03:06:27 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok Dec 2 03:06:28 localhost python3[56253]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:06:29 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.2 scrub starts Dec 2 03:06:29 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.2 scrub ok Dec 2 03:06:30 localhost python3[56269]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:06:31 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.16 scrub starts Dec 2 03:06:31 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.16 scrub ok Dec 2 03:06:31 
localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.9 scrub starts Dec 2 03:06:31 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.9 scrub ok Dec 2 03:06:32 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.b scrub starts Dec 2 03:06:32 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.b scrub ok Dec 2 03:06:33 localhost python3[56317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:06:33 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts Dec 2 03:06:33 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok Dec 2 03:06:33 localhost python3[56360]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662792.8810189-91322-236075018444877/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=55e6802793866e8195bd7dc6c06395cc4184e741 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:06:38 localhost python3[56422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:06:38 localhost python3[56465]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662797.962291-91322-254794161285859/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=32e95cb48a0c881d4099e3645e940da5c77bc88c backup=False 
force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:06:40 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.f deep-scrub starts Dec 2 03:06:40 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.f deep-scrub ok Dec 2 03:06:41 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.18 scrub starts Dec 2 03:06:41 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.b scrub starts Dec 2 03:06:41 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.18 scrub ok Dec 2 03:06:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:06:42 localhost podman[56480]: 2025-12-02 08:06:42.429527259 +0000 UTC m=+0.074846399 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Dec 2 03:06:42 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.7 scrub starts Dec 2 03:06:42 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.7 scrub ok Dec 2 03:06:42 localhost podman[56480]: 2025-12-02 08:06:42.624357706 +0000 UTC m=+0.269676826 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, 
distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:06:42 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:06:43 localhost python3[56558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:06:43 localhost python3[56601]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662803.0061452-91322-107019927739205/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=ed42d7e7572ec51630a216299b8e7374862502cf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:06:45 localhost ceph-osd[32582]: osd.3 pg_epoch: 45 pg[7.0( v 42'39 (0'0,42'39] local-lis/les=39/40 n=22 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=11.633689880s) [1,5,3] r=2 lpr=45 pi=[39,45)/1 luod=0'0 lua=42'37 crt=42'39 lcod 42'38 mlcod 0'0 active pruub 1181.808227539s@ mbc={}] start_peering_interval up [1,5,3] -> [1,5,3], acting [1,5,3] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:45 localhost ceph-osd[32582]: osd.3 pg_epoch: 45 pg[7.0( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=11.632221222s) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 lcod 42'38 mlcod 0'0 unknown NOTIFY pruub 1181.808227539s@ mbc={}] state: transitioning to Stray Dec 2 
03:06:45 localhost ceph-osd[31622]: osd.0 pg_epoch: 45 pg[6.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=8.942852020s) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 1183.499755859s@ mbc={}] start_peering_interval up [0,4,2] -> [0,4,2], acting [0,4,2] -> [0,4,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:45 localhost ceph-osd[31622]: osd.0 pg_epoch: 45 pg[6.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=8.942852020s) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1183.499755859s@ mbc={}] state: transitioning to Primary Dec 2 03:06:45 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.4 scrub starts Dec 2 03:06:45 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 3.4 scrub ok Dec 2 03:06:45 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.4 scrub starts Dec 2 03:06:45 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.4 scrub ok Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.18( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1b( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1a( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 
mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.9( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 
03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.19( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.5( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.a( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.16( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 
pg[6.11( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.13( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.12( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.10( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=37/38 n=0 ec=45/37 
lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.d( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.c( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.3( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.5( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.6( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost 
ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.f( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.e( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.2( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.4( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.9( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.b( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.8( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 
lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.7( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 46 pg[7.a( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=2 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.0( empty local-lis/les=45/46 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete 
Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] 
r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: 
osd.0 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 
active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:46 localhost ceph-osd[31622]: osd.0 pg_epoch: 46 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=0 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:48 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts Dec 2 03:06:48 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok Dec 2 03:06:48 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts Dec 2 03:06:48 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok Dec 2 03:06:48 localhost python3[56663]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:06:49 localhost python3[56708]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662808.4760535-91675-114907344217399/source 
_original_basename=tmpvt8veae_ follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:06:49 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.1d scrub starts Dec 2 03:06:49 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.1d scrub ok Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776106834s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.223754883s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.775996208s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.223754883s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776648521s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.224731445s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.775749207s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=42'39 
lcod 0'0 mlcod 0'0 active pruub 1187.224121094s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776539803s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.224731445s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.775081635s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.223388672s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.775035858s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.223388672s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776295662s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.224853516s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 
pruub=12.776230812s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.224853516s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.775509834s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.224243164s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.775459290s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.224243164s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.774242401s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.223022461s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.774214745s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.223022461s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.774991035s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY 
pruub 1187.224121094s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.775052071s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.224243164s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.774992943s) [4,2,3] r=2 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1187.224243164s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.1f( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 
mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.14( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.13( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.1d( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.11( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767797470s) [0,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612915039s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,2], acting [0,4,2] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769163132s) [4,2,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614379883s@ mbc={}] start_peering_interval up [0,4,2] -> 
[4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767797470s) [0,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.612915039s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766558647s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.611816406s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766497612s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.611816406s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769031525s) [4,2,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.614379883s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767814636s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613281250s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.d( empty 
local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767749786s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.613281250s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767336845s) [4,5,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613037109s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767288208s) [4,5,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.613037109s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767270088s) [2,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613037109s@ mbc={}] start_peering_interval up [0,4,2] -> [2,1,3], acting [0,4,2] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767236710s) [2,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.613037109s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.768362045s) [4,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614501953s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting 
[0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766625404s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612792969s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.768322945s) [4,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.614501953s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766568184s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.612792969s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766295433s) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612670898s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766295433s) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.612670898s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 
ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.768082619s) [4,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614379883s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766028404s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612670898s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765970230s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.612670898s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767256737s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613525391s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767774582s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614624023s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 
localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765157700s) [4,2,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.611938477s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767678261s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.614624023s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765051842s) [4,2,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.611938477s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766743660s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.613525391s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765673637s) [3,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612792969s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,2], acting [0,4,2] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765687943s) [1,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612915039s@ 
mbc={}] start_peering_interval up [0,4,2] -> [1,2,3], acting [0,4,2] -> [1,2,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765621185s) [1,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.612915039s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767791748s) [4,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.614379883s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765572548s) [3,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.612792969s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764496803s) [4,0,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.611816406s@ mbc={}] start_peering_interval up [0,4,2] -> [4,0,2], acting [0,4,2] -> [4,0,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766647339s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614135742s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost 
ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766587257s) [3,4,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.614135742s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764464378s) [4,0,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.611816406s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767200470s) [0,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614746094s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,5], acting [0,4,2] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.767200470s) [0,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.614746094s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765805244s) [3,5,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613525391s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,4], acting [0,4,2] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765371323s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613037109s@ mbc={}] 
start_peering_interval up [0,4,2] -> [3,2,1], acting [0,4,2] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765760422s) [3,5,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.613525391s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765332222s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.613037109s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764786720s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612670898s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766614914s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614624023s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764717102s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.612670898s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost 
ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.766586304s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.614624023s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764067650s) [5,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612426758s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,3], acting [0,4,2] -> [5,1,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764037132s) [5,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.612426758s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765802383s) [4,5,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614135742s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765705109s) [4,5,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.614135742s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764935493s) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613403320s@ mbc={}] 
start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764935493s) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1191.613403320s@ mbc={}] state: transitioning to Primary Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764878273s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613525391s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.762330055s) [5,3,4] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.611206055s@ mbc={}] start_peering_interval up [0,4,2] -> [5,3,4], acting [0,4,2] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764959335s) [5,4,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.613769531s@ mbc={}] start_peering_interval up [0,4,2] -> [5,4,0], acting [0,4,2] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.762292862s) [5,3,4] r=-1 
lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.611206055s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.764915466s) [5,4,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.613769531s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765299797s) [5,0,1] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.614624023s@ mbc={}] start_peering_interval up [0,4,2] -> [5,0,1], acting [0,4,2] -> [5,0,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.765261650s) [5,0,1] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.614624023s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.763521194s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.613525391s@ mbc={}] state: transitioning to Stray Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.762686729s) [5,1,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1191.612792969s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,0], acting [0,4,2] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:49 localhost ceph-osd[31622]: osd.0 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 
les/c/f=46/46/0 sis=47 pruub=12.762652397s) [5,1,0] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1191.612792969s@ mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost python3[56770]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.19( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.1e( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,1,3] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [2,1,3] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,3,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,2,3] r=2 lpr=47 pi=[45,47)/1 
crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.1c( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,3,4] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 47 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,3,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:06:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 48 pg[6.16( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.c( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 48 pg[6.10( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 48 pg[6.18( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[31622]: osd.0 pg_epoch: 48 pg[6.9( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,2,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.f( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 
sis=47) [3,5,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.4( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.11( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,4] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.6( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.1d( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.1f( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.14( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,4,5] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.b( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost 
ceph-osd[32582]: osd.3 pg_epoch: 48 pg[6.13( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,1] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:50 localhost python3[56813]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662810.0898514-91774-79578624410722/source _original_basename=tmpu62kul09 follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:06:51 localhost python3[56843]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Dec 2 03:06:51 localhost ceph-osd[32582]: osd.3 pg_epoch: 49 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.703760147s) [3,5,1] r=0 lpr=49 pi=[45,49)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.223388672s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:51 localhost ceph-osd[32582]: osd.3 pg_epoch: 49 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.703334808s) [3,5,1] r=0 lpr=49 pi=[45,49)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.223144531s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> 0, features acting 
4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:51 localhost ceph-osd[32582]: osd.3 pg_epoch: 49 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.703760147s) [3,5,1] r=0 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1187.223388672s@ mbc={}] state: transitioning to Primary Dec 2 03:06:51 localhost ceph-osd[32582]: osd.3 pg_epoch: 49 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.703334808s) [3,5,1] r=0 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1187.223144531s@ mbc={}] state: transitioning to Primary Dec 2 03:06:51 localhost ceph-osd[32582]: osd.3 pg_epoch: 49 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.703518867s) [3,5,1] r=0 lpr=49 pi=[45,49)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.223754883s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:51 localhost ceph-osd[32582]: osd.3 pg_epoch: 49 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.703518867s) [3,5,1] r=0 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1187.223754883s@ mbc={}] state: transitioning to Primary Dec 2 03:06:51 localhost ceph-osd[32582]: osd.3 pg_epoch: 49 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.703692436s) [3,5,1] r=0 lpr=49 pi=[45,49)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1187.224243164s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:51 localhost ceph-osd[32582]: osd.3 pg_epoch: 49 
pg[7.e( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.703692436s) [3,5,1] r=0 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1187.224243164s@ mbc={}] state: transitioning to Primary Dec 2 03:06:51 localhost python3[56861]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:06:52 localhost ceph-osd[32582]: osd.3 pg_epoch: 50 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49) [3,5,1] r=0 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:52 localhost ceph-osd[32582]: osd.3 pg_epoch: 50 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=49/50 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49) [3,5,1] r=0 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:52 localhost ceph-osd[32582]: osd.3 pg_epoch: 50 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=49/50 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49) [3,5,1] r=0 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:52 localhost ceph-osd[32582]: osd.3 pg_epoch: 50 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49) [3,5,1] r=0 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:06:53 localhost ansible-async_wrapper.py[57033]: Invoked with 49553956186 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662812.6819944-91863-12133943783798/AnsiballZ_command.py _ Dec 2 03:06:53 localhost ansible-async_wrapper.py[57036]: Starting module and watcher Dec 2 03:06:53 localhost ansible-async_wrapper.py[57036]: Start watching 57037 (3600) 
Dec 2 03:06:53 localhost ansible-async_wrapper.py[57037]: Start module (57037) Dec 2 03:06:53 localhost ansible-async_wrapper.py[57033]: Return async_wrapper task started. Dec 2 03:06:53 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.17 scrub starts Dec 2 03:06:53 localhost python3[57057]: ansible-ansible.legacy.async_status Invoked with jid=49553956186.57033 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:06:55 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 5.17 scrub ok Dec 2 03:06:55 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.b scrub starts Dec 2 03:06:55 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.b scrub ok Dec 2 03:06:56 localhost puppet-user[57056]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:06:56 localhost puppet-user[57056]: (file: /etc/puppet/hiera.yaml) Dec 2 03:06:56 localhost puppet-user[57056]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:06:56 localhost puppet-user[57056]: (file & line not available) Dec 2 03:06:57 localhost puppet-user[57056]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:06:57 localhost puppet-user[57056]: (file & line not available) Dec 2 03:06:57 localhost puppet-user[57056]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 2 03:06:57 localhost puppet-user[57056]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Dec 2 03:06:57 localhost puppet-user[57056]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.12 seconds Dec 2 03:06:57 localhost puppet-user[57056]: Notice: Applied catalog in 0.04 seconds Dec 2 03:06:57 localhost puppet-user[57056]: Application: Dec 2 03:06:57 localhost puppet-user[57056]: Initial environment: production Dec 2 03:06:57 localhost puppet-user[57056]: Converged environment: production Dec 2 03:06:57 localhost puppet-user[57056]: Run mode: user Dec 2 03:06:57 localhost puppet-user[57056]: Changes: Dec 2 03:06:57 localhost puppet-user[57056]: Events: Dec 2 03:06:57 localhost puppet-user[57056]: Resources: Dec 2 03:06:57 localhost puppet-user[57056]: Total: 10 Dec 2 03:06:57 localhost puppet-user[57056]: Time: Dec 2 03:06:57 localhost puppet-user[57056]: Schedule: 0.00 Dec 2 03:06:57 localhost puppet-user[57056]: File: 0.00 Dec 2 03:06:57 localhost puppet-user[57056]: Exec: 0.01 Dec 2 03:06:57 localhost puppet-user[57056]: Augeas: 0.01 Dec 2 03:06:57 localhost puppet-user[57056]: Transaction evaluation: 0.03 Dec 2 03:06:57 localhost puppet-user[57056]: Catalog application: 0.04 Dec 2 03:06:57 localhost puppet-user[57056]: Config retrieval: 0.15 Dec 2 03:06:57 localhost puppet-user[57056]: Last run: 1764662817 Dec 2 03:06:57 localhost puppet-user[57056]: Filebucket: 0.00 Dec 2 03:06:57 localhost puppet-user[57056]: Total: 0.05 Dec 2 03:06:57 localhost puppet-user[57056]: Version: Dec 2 03:06:57 localhost puppet-user[57056]: Config: 1764662816 Dec 2 03:06:57 localhost puppet-user[57056]: Puppet: 7.10.0 Dec 2 03:06:57 localhost ansible-async_wrapper.py[57037]: Module complete (57037) Dec 2 03:06:57 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.13 scrub starts Dec 2 03:06:57 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.13 scrub ok Dec 2 03:06:58 localhost ansible-async_wrapper.py[57036]: Done in kid B. 
Dec 2 03:06:58 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 2.1f scrub starts Dec 2 03:06:58 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 2.1f scrub ok Dec 2 03:06:59 localhost ceph-osd[32582]: osd.3 pg_epoch: 51 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.033974648s) [3,4,2] r=0 lpr=51 pi=[47,51)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1199.529296875s@ mbc={}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:59 localhost ceph-osd[32582]: osd.3 pg_epoch: 51 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.025829315s) [3,4,2] r=0 lpr=51 pi=[47,51)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1199.521118164s@ mbc={}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:59 localhost ceph-osd[32582]: osd.3 pg_epoch: 51 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.033367157s) [3,4,2] r=0 lpr=51 pi=[47,51)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1199.528564453s@ mbc={}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:59 localhost ceph-osd[32582]: osd.3 pg_epoch: 51 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.025732040s) [3,4,2] r=0 lpr=51 pi=[47,51)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1199.521118164s@ mbc={}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 
3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:06:59 localhost ceph-osd[32582]: osd.3 pg_epoch: 51 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.025829315s) [3,4,2] r=0 lpr=51 pi=[47,51)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1199.521118164s@ mbc={}] state: transitioning to Primary Dec 2 03:06:59 localhost ceph-osd[32582]: osd.3 pg_epoch: 51 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.033367157s) [3,4,2] r=0 lpr=51 pi=[47,51)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1199.528564453s@ mbc={}] state: transitioning to Primary Dec 2 03:06:59 localhost ceph-osd[32582]: osd.3 pg_epoch: 51 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.025732040s) [3,4,2] r=0 lpr=51 pi=[47,51)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1199.521118164s@ mbc={}] state: transitioning to Primary Dec 2 03:06:59 localhost ceph-osd[32582]: osd.3 pg_epoch: 51 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.033974648s) [3,4,2] r=0 lpr=51 pi=[47,51)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1199.529296875s@ mbc={}] state: transitioning to Primary Dec 2 03:07:00 localhost ceph-osd[32582]: osd.3 pg_epoch: 52 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=51) [3,4,2] r=0 lpr=51 pi=[47,51)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:07:00 localhost ceph-osd[32582]: osd.3 pg_epoch: 52 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=51) [3,4,2] r=0 lpr=51 pi=[47,51)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:07:00 localhost ceph-osd[32582]: osd.3 pg_epoch: 52 pg[7.3( v 42'39 
(0'0,42'39] local-lis/les=51/52 n=2 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51) [3,4,2] r=0 lpr=51 pi=[47,51)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:07:00 localhost ceph-osd[32582]: osd.3 pg_epoch: 52 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51) [3,4,2] r=0 lpr=51 pi=[47,51)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:07:01 localhost ceph-osd[31622]: osd.0 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,1,2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:01 localhost ceph-osd[32582]: osd.3 pg_epoch: 53 pg[7.4( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.593282700s) [0,1,2] r=-1 lpr=53 pi=[45,53)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1195.224121094s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:01 localhost ceph-osd[32582]: osd.3 pg_epoch: 53 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.592213631s) [0,1,2] r=-1 lpr=53 pi=[45,53)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1195.223144531s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:01 localhost ceph-osd[32582]: osd.3 pg_epoch: 53 pg[7.4( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.593200684s) [0,1,2] r=-1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1195.224121094s@ mbc={}] state: 
transitioning to Stray Dec 2 03:07:01 localhost ceph-osd[32582]: osd.3 pg_epoch: 53 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.592172623s) [0,1,2] r=-1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1195.223144531s@ mbc={}] state: transitioning to Stray Dec 2 03:07:01 localhost ceph-osd[31622]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,1,2] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:02 localhost ceph-osd[31622]: osd.0 pg_epoch: 54 pg[7.4( v 42'39 lc 42'15 (0'0,42'39] local-lis/les=53/54 n=4 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,1,2] r=0 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(1+2)=4}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:02 localhost ceph-osd[31622]: osd.0 pg_epoch: 54 pg[7.c( v 42'39 lc 42'17 (0'0,42'39] local-lis/les=53/54 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53) [0,1,2] r=0 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:03 localhost python3[57312]: ansible-ansible.legacy.async_status Invoked with jid=49553956186.57033 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:07:04 localhost python3[57328]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:07:04 localhost python3[57344]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True 
get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:07:05 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.1f scrub starts Dec 2 03:07:05 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.1f scrub ok Dec 2 03:07:05 localhost python3[57394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:05 localhost python3[57412]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpqxu7g064 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:07:06 localhost python3[57442]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:07 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.c scrub starts Dec 2 03:07:07 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.c scrub ok Dec 2 03:07:07 localhost python3[57546]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False 
set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 2 03:07:08 localhost python3[57565]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:08 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.0 deep-scrub starts Dec 2 03:07:08 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.0 deep-scrub ok Dec 2 03:07:09 localhost python3[57597]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:07:09 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.6 scrub starts Dec 2 03:07:09 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.6 scrub ok Dec 2 03:07:09 localhost ceph-osd[32582]: osd.3 pg_epoch: 55 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.927244186s) [2,0,4] r=-1 lpr=55 pi=[47,55)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1207.528686523s@ mbc={}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:09 localhost ceph-osd[32582]: osd.3 pg_epoch: 55 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.933298111s) [2,0,4] r=-1 lpr=55 pi=[47,55)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 
active pruub 1207.534790039s@ mbc={}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:09 localhost ceph-osd[32582]: osd.3 pg_epoch: 55 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.927161217s) [2,0,4] r=-1 lpr=55 pi=[47,55)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.528686523s@ mbc={}] state: transitioning to Stray Dec 2 03:07:09 localhost ceph-osd[32582]: osd.3 pg_epoch: 55 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.933200836s) [2,0,4] r=-1 lpr=55 pi=[47,55)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.534790039s@ mbc={}] state: transitioning to Stray Dec 2 03:07:09 localhost python3[57647]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:10 localhost python3[57665]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:10 localhost python3[57727]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:07:10 localhost 
ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4224 writes, 19K keys, 4224 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4224 writes, 344 syncs, 12.28 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 965 writes, 3586 keys, 965 commit groups, 1.0 writes per commit group, ingest: 1.55 MB, 0.00 MB/s#012Interval WAL: 965 writes, 199 syncs, 4.85 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): 
cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581cab122d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581cab122d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Dec 2 03:07:10 localhost ceph-osd[31622]: osd.0 pg_epoch: 55 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55) [2,0,4] r=1 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:10 localhost ceph-osd[31622]: osd.0 pg_epoch: 55 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55) [2,0,4] r=1 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:10 localhost python3[57745]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False 
state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:11 localhost python3[57807]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:11 localhost python3[57825]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:11 localhost ceph-osd[31622]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,4,5] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:11 localhost ceph-osd[31622]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,4,5] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:11 localhost ceph-osd[32582]: osd.3 pg_epoch: 57 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=49/50 n=2 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.969664574s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1209.587890625s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 
0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:11 localhost ceph-osd[32582]: osd.3 pg_epoch: 57 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=49/50 n=2 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.969498634s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1209.587890625s@ mbc={}] state: transitioning to Stray Dec 2 03:07:11 localhost ceph-osd[32582]: osd.3 pg_epoch: 57 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.969781876s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1209.589599609s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:11 localhost ceph-osd[32582]: osd.3 pg_epoch: 57 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.969389915s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1209.589599609s@ mbc={}] state: transitioning to Stray Dec 2 03:07:12 localhost python3[57887]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:12 localhost python3[57905]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
setype=None attributes=None Dec 2 03:07:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:07:12 localhost ceph-osd[31622]: osd.0 pg_epoch: 58 pg[7.e( v 42'39 lc 42'19 (0'0,42'39] local-lis/les=57/58 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,4,5] r=0 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:12 localhost ceph-osd[31622]: osd.0 pg_epoch: 58 pg[7.6( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=57/58 n=2 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,4,5] r=0 lpr=57 pi=[49,57)/1 crt=42'39 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:12 localhost systemd[1]: tmp-crun.Sb6vs7.mount: Deactivated successfully. Dec 2 03:07:12 localhost podman[57936]: 2025-12-02 08:07:12.807536816 +0000 UTC m=+0.100076687 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=metrics_qdr, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:07:12 localhost python3[57935]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:07:12 localhost systemd[1]: Reloading. 
Dec 2 03:07:13 localhost podman[57936]: 2025-12-02 08:07:13.002936109 +0000 UTC m=+0.295475940 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:07:13 localhost systemd-rc-local-generator[57993]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:07:13 localhost systemd-sysv-generator[57997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:07:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:07:13 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:07:13 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.4 scrub starts Dec 2 03:07:13 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.4 scrub ok Dec 2 03:07:13 localhost python3[58051]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:14 localhost python3[58069]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:14 localhost python3[58131]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:07:14 localhost python3[58149]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:07:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 
1200.2 total, 600.0 interval#012Cumulative writes: 4992 writes, 22K keys, 4992 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4992 writes, 517 syncs, 9.66 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1604 writes, 5649 keys, 1604 commit groups, 1.0 writes per commit group, ingest: 2.18 MB, 0.00 MB/s#012Interval WAL: 1604 writes, 319 syncs, 5.03 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 
0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me Dec 2 03:07:15 localhost python3[58179]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:07:15 localhost systemd[1]: Reloading. Dec 2 03:07:15 localhost systemd-sysv-generator[58208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:07:15 localhost systemd-rc-local-generator[58204]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:07:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:07:15 localhost systemd[1]: Starting Create netns directory... Dec 2 03:07:15 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:07:15 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:07:15 localhost systemd[1]: Finished Create netns directory. Dec 2 03:07:16 localhost python3[58237]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:07:16 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.10 scrub starts Dec 2 03:07:16 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.10 scrub ok Dec 2 03:07:17 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.11 scrub starts Dec 2 03:07:17 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.11 scrub ok Dec 2 03:07:17 localhost python3[58295]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:07:18 localhost podman[58373]: 2025-12-02 08:07:18.30208335 +0000 UTC m=+0.102201665 container create b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step2, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, architecture=x86_64, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z) Dec 2 03:07:18 localhost podman[58380]: 2025-12-02 08:07:18.319773782 +0000 UTC m=+0.100952531 container create c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, config_id=tripleo_step2, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, container_name=nova_virtqemud_init_logs, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:07:18 localhost podman[58373]: 2025-12-02 08:07:18.247229116 +0000 UTC m=+0.047347461 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:07:18 localhost systemd[1]: Started libpod-conmon-b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8.scope. Dec 2 03:07:18 localhost systemd[1]: Started libpod-conmon-c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1.scope. 
Dec 2 03:07:18 localhost podman[58380]: 2025-12-02 08:07:18.256365584 +0000 UTC m=+0.037544343 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:07:18 localhost systemd[1]: Started libcrun container. Dec 2 03:07:18 localhost systemd[1]: Started libcrun container. Dec 2 03:07:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae00fa9a5dde399a6e7e6528b55d78146560e04771ac7245c19ea55518318121/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Dec 2 03:07:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/977574eb305bf7cb5c1e2fdd973144cbfe7638acd584e0968bf15c31ee49846c/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:07:18 localhost podman[58373]: 2025-12-02 08:07:18.39387645 +0000 UTC m=+0.193994735 container init b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, container_name=nova_compute_init_log, config_id=tripleo_step2, build-date=2025-11-19T00:36:58Z, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:07:18 localhost podman[58373]: 2025-12-02 08:07:18.405766525 +0000 UTC m=+0.205884810 container start b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_compute_init_log, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:07:18 localhost python3[58295]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Dec 2 03:07:18 localhost systemd[1]: libpod-b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8.scope: Deactivated successfully. 
Dec 2 03:07:18 localhost podman[58380]: 2025-12-02 08:07:18.443536903 +0000 UTC m=+0.224715622 container init c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud_init_logs, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:07:18 localhost podman[58380]: 
2025-12-02 08:07:18.455305704 +0000 UTC m=+0.236484423 container start c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, config_id=tripleo_step2, build-date=2025-11-19T00:35:22Z, release=1761123044) Dec 2 03:07:18 localhost python3[58295]: ansible-tripleo_container_manage 
PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Dec 2 03:07:18 localhost systemd[1]: libpod-c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1.scope: Deactivated successfully. 
Dec 2 03:07:18 localhost podman[58411]: 2025-12-02 08:07:18.474362473 +0000 UTC m=+0.052377368 container died b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-nova-compute, container_name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:07:18 localhost podman[58435]: 2025-12-02 08:07:18.519441581 +0000 UTC m=+0.044010340 container died 
c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step2, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_virtqemud_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git) Dec 2 03:07:18 localhost podman[58435]: 2025-12-02 08:07:18.54987128 +0000 UTC m=+0.074440019 container cleanup 
c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, config_id=tripleo_step2, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=nova_virtqemud_init_logs, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:07:18 localhost systemd[1]: libpod-conmon-c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1.scope: Deactivated successfully. 
Dec 2 03:07:18 localhost podman[58412]: 2025-12-02 08:07:18.568172659 +0000 UTC m=+0.138629608 container cleanup b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_compute_init_log, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Dec 2 03:07:18 localhost systemd[1]: libpod-conmon-b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8.scope: Deactivated successfully. 
Dec 2 03:07:18 localhost podman[58561]: 2025-12-02 08:07:18.977793447 +0000 UTC m=+0.083502516 container create 6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, config_id=tripleo_step2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=create_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:07:19 localhost podman[58562]: 2025-12-02 08:07:19.009288685 +0000 UTC m=+0.105834825 container create db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=create_haproxy_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step2) Dec 2 03:07:19 localhost systemd[1]: Started libpod-conmon-6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f.scope. Dec 2 03:07:19 localhost podman[58561]: 2025-12-02 08:07:18.929418239 +0000 UTC m=+0.035127338 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:07:19 localhost systemd[1]: Started libcrun container. Dec 2 03:07:19 localhost systemd[1]: Started libpod-conmon-db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d.scope. 
Dec 2 03:07:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55b3f6a1664d44c5b07239ef1b7cbc40e1e222dc48149b5fd43766f0362bb85/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:07:19 localhost systemd[1]: Started libcrun container. Dec 2 03:07:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368ab18e82291a20f6c3548bd942730bbcde8a3da90171ac2014bcabec91a7fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 03:07:19 localhost podman[58561]: 2025-12-02 08:07:19.046492438 +0000 UTC m=+0.152201507 container init 6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, distribution-scope=public, release=1761123044) Dec 2 03:07:19 localhost podman[58562]: 2025-12-02 08:07:18.94965207 +0000 UTC m=+0.046198260 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 2 03:07:19 localhost podman[58562]: 2025-12-02 08:07:19.051571736 +0000 UTC m=+0.148117866 container init db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 
'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, container_name=create_haproxy_wrapper, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step2, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:07:19 localhost podman[58561]: 2025-12-02 08:07:19.058845455 +0000 UTC m=+0.164554524 container start 6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step2, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, batch=17.1_20251118.1) Dec 2 03:07:19 localhost podman[58561]: 2025-12-02 08:07:19.059387489 +0000 UTC m=+0.165096568 container attach 6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=create_virtlogd_wrapper, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true) Dec 2 03:07:19 localhost podman[58562]: 2025-12-02 08:07:19.06053027 +0000 UTC m=+0.157076370 container start db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step2, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z']}) Dec 2 03:07:19 localhost podman[58562]: 2025-12-02 08:07:19.060789717 +0000 UTC m=+0.157335897 container attach db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=create_haproxy_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, config_id=tripleo_step2) Dec 2 03:07:19 localhost systemd[1]: var-lib-containers-storage-overlay-ae00fa9a5dde399a6e7e6528b55d78146560e04771ac7245c19ea55518318121-merged.mount: Deactivated successfully. Dec 2 03:07:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4991e2f61b871b06b61f9e6a52ca78b603ef5fc80d3f5d4703835ea958583b1-userdata-shm.mount: Deactivated successfully. Dec 2 03:07:19 localhost systemd[1]: var-lib-containers-storage-overlay-977574eb305bf7cb5c1e2fdd973144cbfe7638acd584e0968bf15c31ee49846c-merged.mount: Deactivated successfully. Dec 2 03:07:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8fad67557c944893e6ba5b70593fb712d6466a579fce129429ec2894b1f6ad8-userdata-shm.mount: Deactivated successfully. 
Dec 2 03:07:19 localhost ceph-osd[32582]: osd.3 pg_epoch: 59 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.804286003s) [1,5,3] r=2 lpr=59 pi=[51,59)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1217.578247070s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:19 localhost ceph-osd[32582]: osd.3 pg_epoch: 59 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.804210663s) [1,5,3] r=2 lpr=59 pi=[51,59)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1217.578247070s@ mbc={}] state: transitioning to Stray Dec 2 03:07:19 localhost ceph-osd[32582]: osd.3 pg_epoch: 59 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.800751686s) [1,5,3] r=2 lpr=59 pi=[51,59)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1217.574951172s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:19 localhost ceph-osd[32582]: osd.3 pg_epoch: 59 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.800600052s) [1,5,3] r=2 lpr=59 pi=[51,59)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1217.574951172s@ mbc={}] state: transitioning to Stray Dec 2 03:07:20 localhost ovs-vsctl[58678]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Dec 2 03:07:21 localhost systemd[1]: libpod-6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f.scope: Deactivated successfully. 
Dec 2 03:07:21 localhost systemd[1]: libpod-6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f.scope: Consumed 2.096s CPU time. Dec 2 03:07:21 localhost podman[58561]: 2025-12-02 08:07:21.152087285 +0000 UTC m=+2.257796334 container died 6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, architecture=x86_64, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044) Dec 2 03:07:21 localhost systemd[1]: tmp-crun.n28ABt.mount: Deactivated successfully. Dec 2 03:07:21 localhost podman[58815]: 2025-12-02 08:07:21.248135691 +0000 UTC m=+0.083271079 container cleanup 6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=create_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:07:21 localhost systemd[1]: libpod-conmon-6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f.scope: Deactivated successfully. 
Dec 2 03:07:21 localhost python3[58295]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Dec 2 03:07:21 localhost ceph-osd[32582]: osd.3 pg_epoch: 61 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=12.415328026s) [3,4,5] r=0 lpr=61 pi=[45,61)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1219.225830078s@ mbc={}] start_peering_interval up [1,5,3] -> [3,4,5], acting [1,5,3] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:21 localhost ceph-osd[32582]: osd.3 pg_epoch: 61 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=12.415328026s) [3,4,5] r=0 lpr=61 pi=[45,61)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1219.225830078s@ mbc={}] state: transitioning to Primary Dec 2 03:07:22 localhost systemd[1]: libpod-db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d.scope: Deactivated successfully. Dec 2 03:07:22 localhost systemd[1]: libpod-db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d.scope: Consumed 2.067s CPU time. 
Dec 2 03:07:22 localhost podman[58562]: 2025-12-02 08:07:22.016707658 +0000 UTC m=+3.113253768 container died db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=create_haproxy_wrapper, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, config_id=tripleo_step2, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:07:22 localhost podman[58853]: 2025-12-02 08:07:22.082186991 +0000 UTC m=+0.055389000 container cleanup db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://www.redhat.com, container_name=create_haproxy_wrapper, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step2, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:07:22 localhost systemd[1]: libpod-conmon-db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d.scope: Deactivated successfully. 
Dec 2 03:07:22 localhost python3[58295]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Dec 2 03:07:22 localhost systemd[1]: var-lib-containers-storage-overlay-368ab18e82291a20f6c3548bd942730bbcde8a3da90171ac2014bcabec91a7fe-merged.mount: Deactivated successfully. Dec 2 03:07:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db624006aefb080ca6821323538acdc984433e40a65b9a8202590ed74cf3036d-userdata-shm.mount: Deactivated successfully. Dec 2 03:07:22 localhost systemd[1]: var-lib-containers-storage-overlay-a55b3f6a1664d44c5b07239ef1b7cbc40e1e222dc48149b5fd43766f0362bb85-merged.mount: Deactivated successfully. Dec 2 03:07:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6dbd36b1217b2142dd84dc17ea35f1612e9d8de121805c102bfb7395aec4018f-userdata-shm.mount: Deactivated successfully. 
Dec 2 03:07:22 localhost python3[58908]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:22 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.18 scrub starts Dec 2 03:07:22 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.18 scrub ok Dec 2 03:07:22 localhost ceph-osd[32582]: osd.3 pg_epoch: 62 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=61/62 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61) [3,4,5] r=0 lpr=61 pi=[45,61)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:07:23 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.9 scrub starts Dec 2 03:07:23 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.9 scrub ok Dec 2 03:07:24 localhost python3[59029]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005541913 step=2 update_config_hash_only=False Dec 2 03:07:24 localhost python3[59045]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:07:25 localhost python3[59061]: ansible-container_config_data Invoked with 
config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:07:26 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.16 scrub starts Dec 2 03:07:26 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 6.16 scrub ok Dec 2 03:07:29 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.1d deep-scrub starts Dec 2 03:07:29 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.1d deep-scrub ok Dec 2 03:07:29 localhost ceph-osd[31622]: osd.0 pg_epoch: 63 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=63) [0,2,4] r=0 lpr=63 pi=[47,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:29 localhost ceph-osd[32582]: osd.3 pg_epoch: 63 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.924983978s) [0,2,4] r=-1 lpr=63 pi=[47,63)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1223.529296875s@ mbc={}] start_peering_interval up [4,2,3] -> [0,2,4], acting [4,2,3] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:29 localhost ceph-osd[32582]: osd.3 pg_epoch: 63 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.924929619s) [0,2,4] r=-1 lpr=63 pi=[47,63)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1223.529296875s@ mbc={}] state: transitioning to Stray Dec 2 03:07:30 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 7.6 scrub starts Dec 2 03:07:31 localhost ceph-osd[31622]: osd.0 pg_epoch: 64 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=63/64 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=63) [0,2,4] r=0 lpr=63 pi=[47,63)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 2 03:07:31 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] 
: 7.6 scrub ok Dec 2 03:07:31 localhost ceph-osd[32582]: osd.3 pg_epoch: 65 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=8.800328255s) [2,0,4] r=-1 lpr=65 pi=[49,65)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1225.584350586s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,4], acting [3,5,1] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:31 localhost ceph-osd[32582]: osd.3 pg_epoch: 65 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=8.799696922s) [2,0,4] r=-1 lpr=65 pi=[49,65)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1225.584350586s@ mbc={}] state: transitioning to Stray Dec 2 03:07:32 localhost ceph-osd[31622]: osd.0 pg_epoch: 65 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65) [2,0,4] r=1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:34 localhost ceph-osd[32582]: osd.3 pg_epoch: 67 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=14.453248978s) [3,1,2] r=0 lpr=67 pi=[51,67)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1233.576293945s@ mbc={}] start_peering_interval up [3,4,2] -> [3,1,2], acting [3,4,2] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:34 localhost ceph-osd[32582]: osd.3 pg_epoch: 67 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=14.453248978s) [3,1,2] r=0 lpr=67 pi=[51,67)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1233.576293945s@ mbc={}] state: transitioning to Primary Dec 2 03:07:34 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.14 scrub starts Dec 2 03:07:34 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.14 scrub 
ok Dec 2 03:07:35 localhost ceph-osd[31622]: osd.0 pg_epoch: 68 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=53/54 n=1 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=15.064795494s) [1,3,2] r=-1 lpr=68 pi=[53,68)/1 crt=42'39 mlcod 0'0 active pruub 1240.224365234s@ mbc={255={}}] start_peering_interval up [0,1,2] -> [1,3,2], acting [0,1,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:35 localhost ceph-osd[31622]: osd.0 pg_epoch: 68 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=53/54 n=1 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=15.064718246s) [1,3,2] r=-1 lpr=68 pi=[53,68)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1240.224365234s@ mbc={}] state: transitioning to Stray Dec 2 03:07:35 localhost ceph-osd[32582]: osd.3 pg_epoch: 68 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=67/68 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67) [3,1,2] r=0 lpr=67 pi=[51,67)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:35 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 7.e scrub starts Dec 2 03:07:35 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 7.e scrub ok Dec 2 03:07:37 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.f scrub starts Dec 2 03:07:37 localhost ceph-osd[32582]: osd.3 pg_epoch: 68 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68) [1,3,2] r=1 lpr=68 pi=[53,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:37 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 6.f scrub ok Dec 2 03:07:38 localhost ceph-osd[31622]: osd.0 pg_epoch: 70 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=55/56 n=1 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=12.514230728s) [1,3,5] r=-1 lpr=70 pi=[55,70)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1240.024169922s@ mbc={}] start_peering_interval up [2,0,4] 
-> [1,3,5], acting [2,0,4] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:38 localhost ceph-osd[31622]: osd.0 pg_epoch: 70 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=55/56 n=1 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=12.514141083s) [1,3,5] r=-1 lpr=70 pi=[55,70)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1240.024169922s@ mbc={}] state: transitioning to Stray Dec 2 03:07:38 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 7.2 scrub starts Dec 2 03:07:38 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 7.2 scrub ok Dec 2 03:07:39 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts Dec 2 03:07:39 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok Dec 2 03:07:39 localhost ceph-osd[32582]: osd.3 pg_epoch: 70 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70) [1,3,5] r=1 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 2 03:07:39 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 7.9 scrub starts Dec 2 03:07:40 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 7.9 scrub ok Dec 2 03:07:40 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.6 scrub starts Dec 2 03:07:40 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.6 scrub ok Dec 2 03:07:41 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts Dec 2 03:07:41 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok Dec 2 03:07:42 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 7.8 scrub starts Dec 2 03:07:42 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 7.8 scrub ok Dec 2 03:07:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:07:43 localhost systemd[1]: tmp-crun.kK7BIJ.mount: Deactivated successfully. Dec 2 03:07:43 localhost podman[59062]: 2025-12-02 08:07:43.445771032 +0000 UTC m=+0.086055246 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, release=1761123044) Dec 2 03:07:43 localhost podman[59062]: 2025-12-02 08:07:43.654478087 +0000 UTC m=+0.294762341 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1) Dec 2 03:07:43 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:07:43 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 7.b scrub starts Dec 2 03:07:43 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 7.b scrub ok Dec 2 03:07:44 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.3 scrub starts Dec 2 03:07:44 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 5.3 scrub ok Dec 2 03:07:45 localhost ceph-osd[31622]: osd.0 pg_epoch: 72 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=57/58 n=1 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72 pruub=15.045168877s) [3,5,1] r=-1 lpr=72 pi=[57,72)/1 crt=42'39 mlcod 0'0 active pruub 1250.033569336s@ mbc={255={}}] start_peering_interval up [0,4,5] -> [3,5,1], acting [0,4,5] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:45 localhost ceph-osd[31622]: osd.0 pg_epoch: 72 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=57/58 n=1 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72 pruub=15.045111656s) [3,5,1] r=-1 lpr=72 pi=[57,72)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1250.033569336s@ mbc={}] state: transitioning to Stray Dec 2 03:07:45 localhost ceph-osd[32582]: osd.3 pg_epoch: 72 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72) [3,5,1] r=0 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:45 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.11 scrub starts Dec 2 03:07:45 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.11 scrub ok Dec 2 03:07:46 localhost ceph-osd[32582]: osd.3 pg_epoch: 73 pg[7.e( v 42'39 lc 42'19 (0'0,42'39] local-lis/les=72/73 n=1 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72) [3,5,1] r=0 lpr=72 pi=[57,72)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:47 localhost ceph-osd[31622]: osd.0 pg_epoch: 74 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=59/59 
les/c/f=60/60/0 sis=74) [0,5,1] r=0 lpr=74 pi=[59,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 2 03:07:47 localhost ceph-osd[32582]: osd.3 pg_epoch: 74 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=59/60 n=1 ec=45/39 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=13.090670586s) [0,5,1] r=-1 lpr=74 pi=[59,74)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1245.798461914s@ mbc={}] start_peering_interval up [1,5,3] -> [0,5,1], acting [1,5,3] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 2 03:07:47 localhost ceph-osd[32582]: osd.3 pg_epoch: 74 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=59/60 n=1 ec=45/39 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=13.090611458s) [0,5,1] r=-1 lpr=74 pi=[59,74)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1245.798461914s@ mbc={}] state: transitioning to Stray Dec 2 03:07:48 localhost ceph-osd[31622]: osd.0 pg_epoch: 75 pg[7.f( v 42'39 lc 42'1 (0'0,42'39] local-lis/les=74/75 n=3 ec=45/39 lis/c=59/59 les/c/f=60/60/0 sis=74) [0,5,1] r=0 lpr=74 pi=[59,74)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(2+1)=3}}] state: react AllReplicasActivated Activating complete Dec 2 03:07:48 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.b scrub starts Dec 2 03:07:49 localhost ceph-osd[31622]: log_channel(cluster) log [DBG] : 4.b scrub ok Dec 2 03:07:51 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.10 scrub starts Dec 2 03:07:51 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 4.10 scrub ok Dec 2 03:07:53 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.6 scrub starts Dec 2 03:07:53 localhost ceph-osd[32582]: log_channel(cluster) log [DBG] : 2.6 scrub ok Dec 2 03:08:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:08:14 localhost podman[59167]: 2025-12-02 08:08:14.439423449 +0000 UTC m=+0.079042249 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Dec 2 03:08:14 localhost podman[59167]: 2025-12-02 08:08:14.632030326 +0000 UTC m=+0.271649126 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 2 03:08:14 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:08:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:08:45 localhost podman[59196]: 2025-12-02 08:08:45.430707299 +0000 UTC m=+0.071483709 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, 
maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public) Dec 2 03:08:45 localhost podman[59196]: 2025-12-02 08:08:45.623268275 +0000 UTC m=+0.264044615 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step1, release=1761123044, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:08:45 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:09:03 localhost podman[59327]: 2025-12-02 08:09:03.492783997 +0000 UTC m=+0.067441813 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1763362218, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux ) Dec 2 03:09:03 localhost podman[59327]: 2025-12-02 08:09:03.617326657 +0000 UTC m=+0.191984493 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7) Dec 2 03:09:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:09:16 localhost podman[59470]: 2025-12-02 08:09:16.452390553 +0000 UTC m=+0.092147295 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1) Dec 2 03:09:16 localhost podman[59470]: 2025-12-02 08:09:16.643070909 +0000 UTC m=+0.282827691 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:09:16 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:09:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:09:47 localhost podman[59500]: 2025-12-02 08:09:47.503103093 +0000 UTC m=+0.143048590 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, container_name=metrics_qdr, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1) Dec 2 03:09:47 localhost podman[59500]: 2025-12-02 08:09:47.732160483 +0000 UTC m=+0.372106070 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Dec 2 03:09:47 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:10:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:10:18 localhost systemd[1]: tmp-crun.a3MtZ2.mount: Deactivated successfully. Dec 2 03:10:18 localhost podman[59605]: 2025-12-02 08:10:18.470552416 +0000 UTC m=+0.113750166 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Dec 2 03:10:18 localhost podman[59605]: 2025-12-02 08:10:18.645803764 +0000 UTC m=+0.289001504 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, managed_by=tripleo_ansible, 
name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1) Dec 2 03:10:18 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:10:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:10:49 localhost systemd[1]: tmp-crun.afRSZs.mount: Deactivated successfully. 
Dec 2 03:10:49 localhost podman[59634]: 2025-12-02 08:10:49.445908332 +0000 UTC m=+0.090137135 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:10:49 localhost podman[59634]: 2025-12-02 08:10:49.63887684 +0000 UTC m=+0.283105573 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:10:49 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:11:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:11:20 localhost podman[59740]: 2025-12-02 08:11:20.419313815 +0000 UTC m=+0.057869595 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp17/openstack-qdrouterd, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:11:20 localhost podman[59740]: 2025-12-02 08:11:20.619973395 +0000 UTC m=+0.258529195 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:11:20 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:11:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:11:51 localhost podman[59769]: 2025-12-02 08:11:51.434642306 +0000 UTC m=+0.075482761 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, version=17.1.12) Dec 2 03:11:51 localhost podman[59769]: 2025-12-02 08:11:51.629001852 +0000 UTC m=+0.269842307 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:11:51 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:12:10 localhost python3[59922]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:11 localhost python3[59967]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663130.4798465-98260-93706956727259/source _original_basename=tmpfp834ra9 follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:12 localhost python3[59997]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:14 localhost ansible-async_wrapper.py[60169]: Invoked with 537411591793 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663133.609962-98468-74268929960886/AnsiballZ_command.py _ Dec 2 03:12:14 localhost ansible-async_wrapper.py[60172]: Starting module and watcher Dec 2 03:12:14 localhost ansible-async_wrapper.py[60172]: Start watching 60173 (3600) Dec 2 03:12:14 localhost ansible-async_wrapper.py[60173]: Start module (60173) Dec 2 03:12:14 localhost ansible-async_wrapper.py[60169]: Return async_wrapper task started. Dec 2 03:12:14 localhost python3[60193]: ansible-ansible.legacy.async_status Invoked with jid=537411591793.60169 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:12:17 localhost puppet-user[60182]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 2 03:12:17 localhost puppet-user[60182]: (file: /etc/puppet/hiera.yaml) Dec 2 03:12:17 localhost puppet-user[60182]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:12:17 localhost puppet-user[60182]: (file & line not available) Dec 2 03:12:17 localhost puppet-user[60182]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:12:17 localhost puppet-user[60182]: (file & line not available) Dec 2 03:12:17 localhost puppet-user[60182]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 2 03:12:18 localhost puppet-user[60182]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Dec 2 03:12:18 localhost puppet-user[60182]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.11 seconds Dec 2 03:12:18 localhost puppet-user[60182]: Notice: Applied catalog in 0.04 seconds Dec 2 03:12:18 localhost puppet-user[60182]: Application: Dec 2 03:12:18 localhost puppet-user[60182]: Initial environment: production Dec 2 03:12:18 localhost puppet-user[60182]: Converged environment: production Dec 2 03:12:18 localhost puppet-user[60182]: Run mode: user Dec 2 03:12:18 localhost puppet-user[60182]: Changes: Dec 2 03:12:18 localhost puppet-user[60182]: Events: Dec 2 03:12:18 localhost puppet-user[60182]: Resources: Dec 2 03:12:18 localhost puppet-user[60182]: Total: 10 Dec 2 03:12:18 localhost puppet-user[60182]: Time: Dec 2 03:12:18 localhost puppet-user[60182]: Schedule: 0.00 Dec 2 03:12:18 localhost puppet-user[60182]: File: 0.00 Dec 2 03:12:18 localhost puppet-user[60182]: Exec: 0.01 Dec 2 03:12:18 localhost puppet-user[60182]: Augeas: 0.01 Dec 2 03:12:18 localhost puppet-user[60182]: Transaction evaluation: 0.03 Dec 2 03:12:18 
localhost puppet-user[60182]: Catalog application: 0.04 Dec 2 03:12:18 localhost puppet-user[60182]: Config retrieval: 0.15 Dec 2 03:12:18 localhost puppet-user[60182]: Last run: 1764663138 Dec 2 03:12:18 localhost puppet-user[60182]: Filebucket: 0.00 Dec 2 03:12:18 localhost puppet-user[60182]: Total: 0.05 Dec 2 03:12:18 localhost puppet-user[60182]: Version: Dec 2 03:12:18 localhost puppet-user[60182]: Config: 1764663137 Dec 2 03:12:18 localhost puppet-user[60182]: Puppet: 7.10.0 Dec 2 03:12:18 localhost ansible-async_wrapper.py[60173]: Module complete (60173) Dec 2 03:12:19 localhost ansible-async_wrapper.py[60172]: Done in kid B. Dec 2 03:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:12:22 localhost systemd[1]: tmp-crun.3ZaEcr.mount: Deactivated successfully. Dec 2 03:12:22 localhost podman[60305]: 2025-12-02 08:12:22.445312778 +0000 UTC m=+0.086831224 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, version=17.1.12, release=1761123044, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:12:22 localhost podman[60305]: 2025-12-02 08:12:22.635830709 +0000 UTC m=+0.277349185 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:12:22 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:12:24 localhost python3[60350]: ansible-ansible.legacy.async_status Invoked with jid=537411591793.60169 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:12:25 localhost python3[60366]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:12:26 localhost python3[60382]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:26 localhost python3[60432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:26 localhost python3[60450]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpijj274xa recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:12:27 localhost python3[60480]: ansible-file Invoked with 
path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:28 localhost python3[60583]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 2 03:12:29 localhost python3[60602]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:30 localhost python3[60634]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:31 localhost python3[60684]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:31 localhost python3[60702]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root 
dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:32 localhost python3[60764]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:32 localhost python3[60782]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:33 localhost python3[60844]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:33 localhost python3[60862]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 
03:12:34 localhost python3[60924]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:34 localhost python3[60942]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:34 localhost python3[60972]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:34 localhost systemd[1]: Reloading. Dec 2 03:12:34 localhost systemd-sysv-generator[61003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:34 localhost systemd-rc-local-generator[61000]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 03:12:35 localhost python3[61058]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:36 localhost python3[61076]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:36 localhost python3[61138]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:12:36 localhost python3[61156]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:37 localhost python3[61186]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:12:37 localhost systemd[1]: Reloading. Dec 2 03:12:37 localhost systemd-sysv-generator[61214]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:12:37 localhost systemd-rc-local-generator[61209]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:12:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:12:37 localhost systemd[1]: Starting Create netns directory... Dec 2 03:12:37 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:12:37 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:12:37 localhost systemd[1]: Finished Create netns directory. Dec 2 03:12:38 localhost python3[61243]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:12:39 localhost python3[61299]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:12:40 localhost podman[61461]: 2025-12-02 08:12:40.123170632 +0000 UTC m=+0.058326069 container create eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:12:45Z, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, io.buildah.version=1.41.4) Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17.scope. 
Dec 2 03:12:40 localhost podman[61474]: 2025-12-02 08:12:40.159871203 +0000 UTC m=+0.085553079 container create 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:12:40 localhost podman[61468]: 2025-12-02 08:12:40.176305156 +0000 UTC m=+0.104514371 container create 6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, architecture=x86_64, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}) Dec 2 03:12:40 localhost podman[61462]: 2025-12-02 08:12:40.187938587 +0000 UTC m=+0.121420427 container create a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 
17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog) Dec 2 03:12:40 localhost systemd[1]: Started libcrun container. Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.scope. Dec 2 03:12:40 localhost podman[61462]: 2025-12-02 08:12:40.09187162 +0000 UTC m=+0.025353490 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 2 03:12:40 localhost podman[61461]: 2025-12-02 08:12:40.093935466 +0000 UTC m=+0.029090913 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46603caa88f65e015c74097f596e48b006fc6fd2b23d7cf444ca3fcae1abca86/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56.scope. 
Dec 2 03:12:40 localhost podman[61461]: 2025-12-02 08:12:40.202090197 +0000 UTC m=+0.137245634 container init eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 2 03:12:40 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082042a751b48593af3e4b42b09156dbc115dd133d7891319f3ff1ad0b672b0b/merged/scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost systemd[1]: Started libcrun container. Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082042a751b48593af3e4b42b09156dbc115dd133d7891319f3ff1ad0b672b0b/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost podman[61461]: 2025-12-02 08:12:40.210320204 +0000 UTC m=+0.145475651 container start eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': 
['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4) Dec 2 03:12:40 localhost systemd[1]: libpod-eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17.scope: Deactivated successfully. Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc9ccf45b0c7149995a619e9f57f17685eac5ade5b4374b2581744148a02996/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc9ccf45b0c7149995a619e9f57f17685eac5ade5b4374b2581744148a02996/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc9ccf45b0c7149995a619e9f57f17685eac5ade5b4374b2581744148a02996/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc9ccf45b0c7149995a619e9f57f17685eac5ade5b4374b2581744148a02996/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc9ccf45b0c7149995a619e9f57f17685eac5ade5b4374b2581744148a02996/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc9ccf45b0c7149995a619e9f57f17685eac5ade5b4374b2581744148a02996/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost 
python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc9ccf45b0c7149995a619e9f57f17685eac5ade5b4374b2581744148a02996/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost podman[61474]: 2025-12-02 08:12:40.116436536 +0000 UTC m=+0.042118442 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 2 03:12:40 localhost podman[61468]: 2025-12-02 08:12:40.116896919 +0000 UTC m=+0.045106124 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11.scope. 
Dec 2 03:12:40 localhost podman[61545]: 2025-12-02 08:12:40.247982501 +0000 UTC m=+0.029861303 container died eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step3, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, version=17.1.12, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_init_log, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:12:40 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:12:40 localhost podman[61474]: 2025-12-02 08:12:40.256927438 +0000 UTC m=+0.182609324 container init 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 
'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 2 03:12:40 localhost podman[61468]: 2025-12-02 08:12:40.269230007 +0000 UTC m=+0.197439212 container init 6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, version=17.1.12, build-date=2025-11-19T00:35:22Z, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=nova_virtlogd_wrapper, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 2 03:12:40 localhost podman[61468]: 2025-12-02 08:12:40.278113792 +0000 UTC m=+0.206322997 container start 6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:12:40 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume 
/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:12:40 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:12:40 localhost podman[61474]: 2025-12-02 08:12:40.292253811 +0000 UTC m=+0.217935697 container start 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, container_name=collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:12:40 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=4767aaabc3de112d8791c290aa2b669d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume 
/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 2 03:12:40 localhost systemd[1]: Created slice User Slice of UID 0. Dec 2 03:12:40 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 2 03:12:40 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:12:40 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 2 03:12:40 localhost podman[61462]: 2025-12-02 08:12:40.316728756 +0000 UTC m=+0.250210596 container init a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:12:40 localhost podman[61545]: 2025-12-02 08:12:40.320858729 +0000 UTC m=+0.102737511 container cleanup eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': 
['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, container_name=ceilometer_init_log, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:12:45Z, distribution-scope=public, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:12:40 localhost systemd[1]: Starting User Manager for UID 0... Dec 2 03:12:40 localhost systemd[1]: libpod-conmon-eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17.scope: Deactivated successfully. 
Dec 2 03:12:40 localhost podman[61462]: 2025-12-02 08:12:40.332225243 +0000 UTC m=+0.265707083 container start a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:12:40 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1c70cec5d3310de4d4589e1a95c8fd3c --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 2 03:12:40 localhost podman[61513]: 2025-12-02 08:12:40.356689467 +0000 UTC m=+0.231721857 container create 99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, release=1761123044, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97.scope. Dec 2 03:12:40 localhost podman[61513]: 2025-12-02 08:12:40.316667714 +0000 UTC m=+0.191700104 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:12:40 localhost systemd[1]: Started libcrun container. Dec 2 03:12:40 localhost systemd[1]: libpod-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11.scope: Deactivated successfully. 
Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422d62b3b9907c649268e279099615c7aa0520fd45eabb2e450a911bab63aaa2/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422d62b3b9907c649268e279099615c7aa0520fd45eabb2e450a911bab63aaa2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422d62b3b9907c649268e279099615c7aa0520fd45eabb2e450a911bab63aaa2/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost podman[61513]: 2025-12-02 08:12:40.425485713 +0000 UTC m=+0.300518083 container init 99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, container_name=nova_statedir_owner, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': 
{'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute) Dec 2 03:12:40 localhost podman[61513]: 2025-12-02 08:12:40.434549243 +0000 UTC m=+0.309581613 container start 99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', 
'/var/lib/container-config-scripts:/container-config-scripts:z']}, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:12:40 localhost podman[61513]: 2025-12-02 08:12:40.434905623 +0000 UTC m=+0.309938023 container attach 99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step3, version=17.1.12, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 
'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=nova_statedir_owner, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:12:40 localhost podman[61583]: 2025-12-02 08:12:40.439592842 +0000 UTC m=+0.137047718 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Dec 2 03:12:40 localhost podman[61670]: 
2025-12-02 08:12:40.459261504 +0000 UTC m=+0.037875345 container died a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, 
distribution-scope=public, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:12:40 localhost systemd[61605]: Queued start job for default target Main User Target. Dec 2 03:12:40 localhost systemd[61605]: Created slice User Application Slice. Dec 2 03:12:40 localhost systemd[61605]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 2 03:12:40 localhost systemd[61605]: Started Daily Cleanup of User's Temporary Directories. Dec 2 03:12:40 localhost systemd[61605]: Reached target Paths. Dec 2 03:12:40 localhost systemd[61605]: Reached target Timers. Dec 2 03:12:40 localhost systemd[61605]: Starting D-Bus User Message Bus Socket... Dec 2 03:12:40 localhost systemd[61605]: Starting Create User's Volatile Files and Directories... Dec 2 03:12:40 localhost systemd[61605]: Finished Create User's Volatile Files and Directories. Dec 2 03:12:40 localhost systemd[1]: libpod-99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97.scope: Deactivated successfully. Dec 2 03:12:40 localhost systemd[61605]: Listening on D-Bus User Message Bus Socket. Dec 2 03:12:40 localhost systemd[61605]: Reached target Sockets. Dec 2 03:12:40 localhost systemd[61605]: Reached target Basic System. Dec 2 03:12:40 localhost systemd[61605]: Reached target Main User Target. Dec 2 03:12:40 localhost systemd[61605]: Startup finished in 123ms. Dec 2 03:12:40 localhost systemd[1]: Started User Manager for UID 0. 
Dec 2 03:12:40 localhost podman[61670]: 2025-12-02 08:12:40.48633758 +0000 UTC m=+0.064951401 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=) Dec 2 03:12:40 localhost systemd[1]: Started Session c1 of User root. Dec 2 03:12:40 localhost systemd[1]: Started Session c2 of User root. Dec 2 03:12:40 localhost systemd[1]: libpod-conmon-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11.scope: Deactivated successfully. Dec 2 03:12:40 localhost podman[61513]: 2025-12-02 08:12:40.535277099 +0000 UTC m=+0.410309499 container died 99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_statedir_owner, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', 
'/var/lib/container-config-scripts:/container-config-scripts:z']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git) Dec 2 03:12:40 localhost systemd[1]: session-c1.scope: Deactivated successfully. 
Dec 2 03:12:40 localhost podman[61708]: 2025-12-02 08:12:40.556286487 +0000 UTC m=+0.056440356 container cleanup 99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, 
container_name=nova_statedir_owner, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 2 03:12:40 localhost systemd[1]: libpod-conmon-99492b9d730d6f61fe3a1a9619bd46efae9946c0fd0973cddb67893b4ee48d97.scope: Deactivated successfully. Dec 2 03:12:40 localhost systemd[1]: session-c2.scope: Deactivated successfully. Dec 2 03:12:40 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Dec 2 03:12:40 localhost podman[61583]: 2025-12-02 08:12:40.62532468 +0000 UTC m=+0.322779586 
container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com) Dec 2 03:12:40 localhost podman[61583]: unhealthy Dec 2 03:12:40 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:12:40 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Failed with result 'exit-code'. 
Dec 2 03:12:40 localhost podman[61843]: 2025-12-02 08:12:40.899710262 +0000 UTC m=+0.056032355 container create 9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:12:40 localhost systemd[1]: Started libpod-conmon-9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1.scope. Dec 2 03:12:40 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ad9083e7cc3e7616303b5d13e7a101d6cbdaa325d96e32c757f24ef781f581/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ad9083e7cc3e7616303b5d13e7a101d6cbdaa325d96e32c757f24ef781f581/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ad9083e7cc3e7616303b5d13e7a101d6cbdaa325d96e32c757f24ef781f581/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ad9083e7cc3e7616303b5d13e7a101d6cbdaa325d96e32c757f24ef781f581/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:40 localhost podman[61843]: 2025-12-02 08:12:40.969018993 +0000 UTC m=+0.125341176 container init 9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack 
Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:12:40 localhost podman[61843]: 2025-12-02 08:12:40.873313175 +0000 UTC m=+0.029635268 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:40 localhost podman[61843]: 2025-12-02 08:12:40.975747858 +0000 UTC m=+0.132069941 container start 9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:35:22Z, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container) Dec 2 03:12:40 localhost 
podman[61865]: 2025-12-02 08:12:40.983161582 +0000 UTC m=+0.073783435 container create d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtsecretd, release=1761123044, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc.) Dec 2 03:12:41 localhost systemd[1]: Started libpod-conmon-d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f.scope. Dec 2 03:12:41 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a5c1891d67f42ad2a4e32b105c4405edd8c48c080a09b863da0e9425a915a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a5c1891d67f42ad2a4e32b105c4405edd8c48c080a09b863da0e9425a915a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a5c1891d67f42ad2a4e32b105c4405edd8c48c080a09b863da0e9425a915a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a5c1891d67f42ad2a4e32b105c4405edd8c48c080a09b863da0e9425a915a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a5c1891d67f42ad2a4e32b105c4405edd8c48c080a09b863da0e9425a915a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a5c1891d67f42ad2a4e32b105c4405edd8c48c080a09b863da0e9425a915a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a5c1891d67f42ad2a4e32b105c4405edd8c48c080a09b863da0e9425a915a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost podman[61865]: 2025-12-02 08:12:41.041419328 +0000 UTC m=+0.132041201 container init d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, architecture=x86_64, container_name=nova_virtsecretd) Dec 2 03:12:41 localhost podman[61865]: 2025-12-02 08:12:40.944681412 +0000 UTC m=+0.035303305 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost podman[61865]: 2025-12-02 08:12:41.049361716 +0000 UTC m=+0.139983599 container start d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, container_name=nova_virtsecretd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:12:41 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume 
/run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:12:41 localhost systemd[1]: Started Session c3 of User root. Dec 2 03:12:41 localhost systemd[1]: var-lib-containers-storage-overlay-46603caa88f65e015c74097f596e48b006fc6fd2b23d7cf444ca3fcae1abca86-merged.mount: Deactivated successfully. Dec 2 03:12:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eeb97483e6e3a84a709a47bb762665bd54b701b239f0abc4e5f02f2760c5dd17-userdata-shm.mount: Deactivated successfully. Dec 2 03:12:41 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Dec 2 03:12:41 localhost podman[62014]: 2025-12-02 08:12:41.480676723 +0000 UTC m=+0.065557198 container create 21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:12:41 localhost podman[62015]: 2025-12-02 08:12:41.513154558 +0000 UTC m=+0.093796105 container create c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:12:41 localhost systemd[1]: Started 
libpod-conmon-21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8.scope. Dec 2 03:12:41 localhost systemd[1]: Started libcrun container. Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a9a64287106d93235dbfecf490680361e8b3523afa4b3bf3ce2b25f0636261/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a9a64287106d93235dbfecf490680361e8b3523afa4b3bf3ce2b25f0636261/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a9a64287106d93235dbfecf490680361e8b3523afa4b3bf3ce2b25f0636261/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a9a64287106d93235dbfecf490680361e8b3523afa4b3bf3ce2b25f0636261/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a9a64287106d93235dbfecf490680361e8b3523afa4b3bf3ce2b25f0636261/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a9a64287106d93235dbfecf490680361e8b3523afa4b3bf3ce2b25f0636261/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28a9a64287106d93235dbfecf490680361e8b3523afa4b3bf3ce2b25f0636261/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost systemd[1]: Started libpod-conmon-c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.scope. 
Dec 2 03:12:41 localhost podman[62014]: 2025-12-02 08:12:41.541379086 +0000 UTC m=+0.126259551 container init 21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_virtnodedevd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container) Dec 2 03:12:41 localhost podman[62014]: 2025-12-02 08:12:41.548644407 +0000 UTC m=+0.133524872 container start 21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': 
True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtnodedevd, architecture=x86_64) Dec 2 03:12:41 localhost podman[62014]: 2025-12-02 08:12:41.448775134 +0000 UTC m=+0.033655619 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume 
/run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:41 localhost systemd[1]: Started libcrun container. Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eee6dae47ff617871c47add2aa57f33c2f7e68905855055afb3a7b04648ecacd/merged/etc/target supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eee6dae47ff617871c47add2aa57f33c2f7e68905855055afb3a7b04648ecacd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:41 localhost podman[62015]: 2025-12-02 08:12:41.465943127 +0000 UTC m=+0.046584744 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 2 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:12:41 localhost podman[62015]: 2025-12-02 08:12:41.582373446 +0000 UTC m=+0.163015003 container init c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z) Dec 2 03:12:41 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:12:41 localhost systemd[1]: Started Session c4 of User root. Dec 2 03:12:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:12:41 localhost podman[62015]: 2025-12-02 08:12:41.615814828 +0000 UTC m=+0.196456365 container start c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:12:41 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=230f4ebc92ecc6f511b0217abb58f1b6 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume 
/var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 2 03:12:41 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:12:41 localhost systemd[1]: Started Session c5 of User root. Dec 2 03:12:41 localhost podman[62069]: 2025-12-02 08:12:41.693246141 +0000 UTC m=+0.067826500 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, container_name=iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, 
io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Dec 2 03:12:41 localhost systemd[1]: session-c5.scope: Deactivated successfully. Dec 2 03:12:41 localhost systemd[1]: session-c4.scope: Deactivated successfully. Dec 2 03:12:41 localhost kernel: Loading iSCSI transport class v2.0-870. 
Dec 2 03:12:41 localhost podman[62069]: 2025-12-02 08:12:41.784720912 +0000 UTC m=+0.159301291 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.12, release=1761123044, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:12:41 localhost podman[62069]: unhealthy Dec 2 03:12:41 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:12:41 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Failed with result 'exit-code'. Dec 2 03:12:42 localhost podman[62191]: 2025-12-02 08:12:42.109885703 +0000 UTC m=+0.089461566 container create 4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}) Dec 2 03:12:42 localhost systemd[1]: Started libpod-conmon-4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56.scope. Dec 2 03:12:42 localhost podman[62191]: 2025-12-02 08:12:42.064000469 +0000 UTC m=+0.043576392 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:42 localhost systemd[1]: Started libcrun container. Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236dcb696b3f9eafb040a89b33b195e1413d367dc5825ca1d6228a54c8e9179b/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236dcb696b3f9eafb040a89b33b195e1413d367dc5825ca1d6228a54c8e9179b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236dcb696b3f9eafb040a89b33b195e1413d367dc5825ca1d6228a54c8e9179b/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236dcb696b3f9eafb040a89b33b195e1413d367dc5825ca1d6228a54c8e9179b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236dcb696b3f9eafb040a89b33b195e1413d367dc5825ca1d6228a54c8e9179b/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/236dcb696b3f9eafb040a89b33b195e1413d367dc5825ca1d6228a54c8e9179b/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/236dcb696b3f9eafb040a89b33b195e1413d367dc5825ca1d6228a54c8e9179b/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost podman[62191]: 2025-12-02 08:12:42.188945312 +0000 UTC m=+0.168521185 container init 4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 2 03:12:42 localhost podman[62191]: 2025-12-02 08:12:42.199152654 +0000 UTC m=+0.178728527 container start 4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtstoraged, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:12:42 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:42 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:12:42 localhost systemd[1]: Started Session c6 of User root. Dec 2 03:12:42 localhost systemd[1]: session-c6.scope: Deactivated successfully. 
Dec 2 03:12:42 localhost podman[62293]: 2025-12-02 08:12:42.612245198 +0000 UTC m=+0.064774396 container create df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:12:42 localhost systemd[1]: Started libpod-conmon-df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca.scope. Dec 2 03:12:42 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost podman[62293]: 2025-12-02 08:12:42.572925964 +0000 UTC m=+0.025455222 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:42 localhost podman[62293]: 2025-12-02 08:12:42.678582906 +0000 UTC m=+0.131112094 container init df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, container_name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:12:42 localhost podman[62293]: 2025-12-02 08:12:42.684073357 +0000 UTC m=+0.136602555 container start df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, container_name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 03:12:42 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:42 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:12:42 localhost systemd[1]: Started Session c7 of User root. Dec 2 03:12:42 localhost systemd[1]: session-c7.scope: Deactivated successfully. 
Dec 2 03:12:43 localhost podman[62401]: 2025-12-02 08:12:43.05718845 +0000 UTC m=+0.087418460 container create 16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, distribution-scope=public, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, tcib_managed=true, container_name=nova_virtproxyd) Dec 2 03:12:43 localhost systemd[1]: Started libpod-conmon-16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3.scope. Dec 2 03:12:43 localhost podman[62401]: 2025-12-02 08:12:43.006550054 +0000 UTC m=+0.036780094 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:43 localhost systemd[1]: Started libcrun container. 
Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8867bb5d598ed9b36fb1a635e0a2434418503870d99edcec1a2221aec233d699/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8867bb5d598ed9b36fb1a635e0a2434418503870d99edcec1a2221aec233d699/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8867bb5d598ed9b36fb1a635e0a2434418503870d99edcec1a2221aec233d699/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8867bb5d598ed9b36fb1a635e0a2434418503870d99edcec1a2221aec233d699/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8867bb5d598ed9b36fb1a635e0a2434418503870d99edcec1a2221aec233d699/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8867bb5d598ed9b36fb1a635e0a2434418503870d99edcec1a2221aec233d699/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8867bb5d598ed9b36fb1a635e0a2434418503870d99edcec1a2221aec233d699/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:12:43 localhost podman[62401]: 2025-12-02 08:12:43.14465254 +0000 UTC m=+0.174882540 container init 16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=nova_virtproxyd, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, build-date=2025-11-19T00:35:22Z, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Dec 2 03:12:43 localhost systemd[1]: tmp-crun.z6QSUe.mount: Deactivated successfully. 
Dec 2 03:12:43 localhost podman[62401]: 2025-12-02 08:12:43.157920506 +0000 UTC m=+0.188150506 container start 16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, container_name=nova_virtproxyd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 2 03:12:43 localhost python3[61299]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:12:43 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:12:43 localhost systemd[1]: Started Session c8 of User root. Dec 2 03:12:43 localhost systemd[1]: session-c8.scope: Deactivated successfully. 
Dec 2 03:12:43 localhost python3[62484]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:44 localhost python3[62500]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:44 localhost python3[62516]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:44 localhost python3[62532]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:44 localhost python3[62548]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:45 localhost python3[62564]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:45 localhost python3[62580]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:45 localhost python3[62596]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:45 localhost python3[62612]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None 
mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:46 localhost python3[62628]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:46 localhost python3[62644]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:46 localhost python3[62660]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:46 localhost python3[62676]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:47 localhost python3[62692]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:47 localhost python3[62708]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:47 localhost python3[62724]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:47 localhost python3[62740]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 
03:12:48 localhost python3[62756]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:12:48 localhost python3[62817]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:49 localhost python3[62846]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:49 localhost python3[62875]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:50 localhost python3[62904]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root 
group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:50 localhost python3[62933]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:51 localhost python3[62962]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:51 localhost python3[62991]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:12:52 localhost python3[63020]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source 
dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:12:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.
Dec 2 03:12:52 localhost podman[63050]: 2025-12-02 08:12:52.962420628 +0000 UTC m=+0.093383874 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro',
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 2 03:12:53 localhost python3[63049]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663168.2486794-99745-202156748293961/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:12:53 localhost podman[63050]: 2025-12-02 08:12:53.170001258 +0000 UTC m=+0.300964504 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack
Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 2 03:12:53 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully.
Dec 2 03:12:53 localhost systemd[1]: Stopping User Manager for UID 0...
Dec 2 03:12:53 localhost systemd[61605]: Activating special unit Exit the Session...
Dec 2 03:12:53 localhost systemd[61605]: Stopped target Main User Target.
Dec 2 03:12:53 localhost systemd[61605]: Stopped target Basic System.
Dec 2 03:12:53 localhost systemd[61605]: Stopped target Paths.
Dec 2 03:12:53 localhost systemd[61605]: Stopped target Sockets.
Dec 2 03:12:53 localhost systemd[61605]: Stopped target Timers.
Dec 2 03:12:53 localhost systemd[61605]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 2 03:12:53 localhost systemd[61605]: Closed D-Bus User Message Bus Socket.
Dec 2 03:12:53 localhost systemd[61605]: Stopped Create User's Volatile Files and Directories.
Dec 2 03:12:53 localhost systemd[61605]: Removed slice User Application Slice.
Dec 2 03:12:53 localhost systemd[61605]: Reached target Shutdown.
Dec 2 03:12:53 localhost systemd[61605]: Finished Exit the Session.
Dec 2 03:12:53 localhost systemd[61605]: Reached target Exit the Session.
Dec 2 03:12:53 localhost systemd[1]: user@0.service: Deactivated successfully.
Dec 2 03:12:53 localhost systemd[1]: Stopped User Manager for UID 0.
Dec 2 03:12:53 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 2 03:12:53 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 2 03:12:53 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 2 03:12:53 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 2 03:12:53 localhost systemd[1]: Removed slice User Slice of UID 0.
Dec 2 03:12:53 localhost python3[63095]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 03:12:53 localhost systemd[1]: Reloading.
Dec 2 03:12:53 localhost systemd-sysv-generator[63127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:12:53 localhost systemd-rc-local-generator[63122]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:12:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:12:54 localhost python3[63149]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:12:54 localhost systemd[1]: Reloading.
Dec 2 03:12:54 localhost systemd-sysv-generator[63178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:12:54 localhost systemd-rc-local-generator[63173]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:12:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:12:54 localhost systemd[1]: Starting collectd container...
Dec 2 03:12:54 localhost systemd[1]: Started collectd container.
Dec 2 03:12:55 localhost python3[63216]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:12:55 localhost systemd[1]: Reloading.
Dec 2 03:12:55 localhost systemd-rc-local-generator[63247]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:12:55 localhost systemd-sysv-generator[63250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:12:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:12:55 localhost systemd[1]: Starting iscsid container...
Dec 2 03:12:55 localhost systemd[1]: Started iscsid container.
Dec 2 03:12:56 localhost python3[63283]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:12:56 localhost systemd[1]: Reloading.
Dec 2 03:12:56 localhost systemd-rc-local-generator[63313]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:12:56 localhost systemd-sysv-generator[63316]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:12:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:12:57 localhost systemd[1]: Starting nova_virtlogd_wrapper container...
Dec 2 03:12:57 localhost systemd[1]: Started nova_virtlogd_wrapper container.
Dec 2 03:12:57 localhost python3[63350]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:12:58 localhost systemd[1]: Reloading.
Dec 2 03:12:58 localhost systemd-rc-local-generator[63376]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:12:58 localhost systemd-sysv-generator[63380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:12:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:12:59 localhost systemd[1]: Starting nova_virtnodedevd container...
Dec 2 03:12:59 localhost tripleo-start-podman-container[63390]: Creating additional drop-in dependency for "nova_virtnodedevd" (21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8)
Dec 2 03:12:59 localhost systemd[1]: Reloading.
Dec 2 03:12:59 localhost systemd-sysv-generator[63447]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:12:59 localhost systemd-rc-local-generator[63444]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:12:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:12:59 localhost systemd[1]: Started nova_virtnodedevd container.
Dec 2 03:13:00 localhost python3[63472]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:13:00 localhost systemd[1]: Reloading.
Dec 2 03:13:00 localhost systemd-rc-local-generator[63503]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:00 localhost systemd-sysv-generator[63506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:00 localhost systemd[1]: Starting nova_virtproxyd container...
Dec 2 03:13:00 localhost tripleo-start-podman-container[63513]: Creating additional drop-in dependency for "nova_virtproxyd" (16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3)
Dec 2 03:13:00 localhost systemd[1]: Reloading.
Dec 2 03:13:00 localhost systemd-sysv-generator[63571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:00 localhost systemd-rc-local-generator[63566]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:01 localhost systemd[1]: Started nova_virtproxyd container.
Dec 2 03:13:02 localhost python3[63596]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:13:02 localhost systemd[1]: Reloading.
Dec 2 03:13:02 localhost systemd-sysv-generator[63627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:02 localhost systemd-rc-local-generator[63622]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:03 localhost systemd[1]: Starting nova_virtqemud container...
Dec 2 03:13:03 localhost tripleo-start-podman-container[63636]: Creating additional drop-in dependency for "nova_virtqemud" (df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca)
Dec 2 03:13:03 localhost systemd[1]: Reloading.
Dec 2 03:13:03 localhost systemd-rc-local-generator[63693]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:03 localhost systemd-sysv-generator[63698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:03 localhost systemd[1]: Started nova_virtqemud container.
Dec 2 03:13:04 localhost python3[63721]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:13:04 localhost systemd[1]: Reloading.
Dec 2 03:13:04 localhost systemd-rc-local-generator[63748]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:04 localhost systemd-sysv-generator[63753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:04 localhost systemd[1]: Starting nova_virtsecretd container...
Dec 2 03:13:04 localhost tripleo-start-podman-container[63761]: Creating additional drop-in dependency for "nova_virtsecretd" (d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f)
Dec 2 03:13:04 localhost systemd[1]: Reloading.
Dec 2 03:13:04 localhost systemd-sysv-generator[63822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:04 localhost systemd-rc-local-generator[63819]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:05 localhost systemd[1]: Started nova_virtsecretd container.
Dec 2 03:13:05 localhost python3[63847]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:13:05 localhost systemd[1]: Reloading.
Dec 2 03:13:06 localhost systemd-rc-local-generator[63876]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:06 localhost systemd-sysv-generator[63880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:06 localhost systemd[1]: Starting nova_virtstoraged container...
Dec 2 03:13:06 localhost tripleo-start-podman-container[63887]: Creating additional drop-in dependency for "nova_virtstoraged" (4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56)
Dec 2 03:13:06 localhost systemd[1]: Reloading.
Dec 2 03:13:06 localhost systemd-rc-local-generator[63936]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:06 localhost systemd-sysv-generator[63943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:08 localhost systemd[1]: Started nova_virtstoraged container.
Dec 2 03:13:08 localhost python3[63970]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 03:13:08 localhost systemd[1]: Reloading.
Dec 2 03:13:08 localhost systemd-rc-local-generator[63994]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 03:13:08 localhost systemd-sysv-generator[63999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 03:13:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 03:13:09 localhost systemd[1]: Starting rsyslog container...
Dec 2 03:13:09 localhost systemd[1]: tmp-crun.746POh.mount: Deactivated successfully.
Dec 2 03:13:09 localhost systemd[1]: Started libcrun container.
Dec 2 03:13:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 2 03:13:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 2 03:13:09 localhost podman[64010]: 2025-12-02 08:13:09.330934147 +0000 UTC m=+0.146197950 container init a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a,
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, container_name=rsyslog, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 2 03:13:09 localhost podman[64010]: 2025-12-02 08:13:09.341639422 +0000 UTC m=+0.156903225 container start a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red
Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-rsyslog-container, tcib_managed=true)
Dec 2 03:13:09 localhost podman[64010]: rsyslog
Dec 2 03:13:09 localhost systemd[1]: Started rsyslog container.
Dec 2 03:13:09 localhost systemd[1]: libpod-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11.scope: Deactivated successfully.
Dec 2 03:13:09 localhost podman[64041]: 2025-12-02 08:13:09.495804481 +0000 UTC m=+0.040605851 container died a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, version=17.1.12)
Dec 2 03:13:09 localhost podman[64041]: 2025-12-02 08:13:09.519852843 +0000 UTC m=+0.064654193 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:49:49Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro',
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-rsyslog, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:13:09 localhost podman[64057]: 2025-12-02 08:13:09.607471048 +0000 UTC m=+0.048992222 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp17/openstack-rsyslog, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:13:09 localhost podman[64057]: rsyslog Dec 2 03:13:09 
localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 2 03:13:09 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1. Dec 2 03:13:09 localhost systemd[1]: Stopped rsyslog container. Dec 2 03:13:09 localhost systemd[1]: Starting rsyslog container... Dec 2 03:13:09 localhost systemd[1]: Started libcrun container. Dec 2 03:13:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:09 localhost podman[64085]: 2025-12-02 08:13:09.922825979 +0000 UTC m=+0.112015898 container init a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git) Dec 2 03:13:09 localhost podman[64085]: 2025-12-02 08:13:09.933138163 +0000 UTC m=+0.122328092 container start a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, version=17.1.12, container_name=rsyslog, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 
rsyslog, vcs-type=git, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., tcib_managed=true, 
config_id=tripleo_step3) Dec 2 03:13:09 localhost podman[64085]: rsyslog Dec 2 03:13:09 localhost systemd[1]: Started rsyslog container. Dec 2 03:13:09 localhost python3[64086]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:13:10 localhost systemd[1]: libpod-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11.scope: Deactivated successfully. Dec 2 03:13:10 localhost podman[64109]: 2025-12-02 08:13:10.081690977 +0000 UTC m=+0.040248030 container died a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, container_name=rsyslog, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:13:10 localhost podman[64109]: 2025-12-02 08:13:10.102913062 +0000 UTC m=+0.061470105 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-rsyslog-container, version=17.1.12, container_name=rsyslog, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:13:10 localhost systemd[1]: tripleo_rsyslog.service: Main 
process exited, code=exited, status=1/FAILURE Dec 2 03:13:10 localhost podman[64123]: 2025-12-02 08:13:10.16963118 +0000 UTC m=+0.041045681 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, container_name=rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Dec 2 03:13:10 localhost podman[64123]: rsyslog Dec 2 03:13:10 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 2 03:13:10 localhost systemd[1]: var-lib-containers-storage-overlay-ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336-merged.mount: Deactivated successfully. Dec 2 03:13:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11-userdata-shm.mount: Deactivated successfully. Dec 2 03:13:10 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Dec 2 03:13:10 localhost systemd[1]: Stopped rsyslog container. Dec 2 03:13:10 localhost systemd[1]: Starting rsyslog container... Dec 2 03:13:10 localhost systemd[1]: Started libcrun container. 
Dec 2 03:13:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:10 localhost podman[64166]: 2025-12-02 08:13:10.465637688 +0000 UTC m=+0.122496197 container init a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, 
managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, distribution-scope=public) Dec 2 03:13:10 localhost podman[64166]: 2025-12-02 08:13:10.47516724 +0000 UTC m=+0.132025749 container start a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog) Dec 2 03:13:10 localhost podman[64166]: rsyslog Dec 2 03:13:10 localhost systemd[1]: Started rsyslog container. Dec 2 03:13:10 localhost systemd[1]: libpod-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11.scope: Deactivated successfully. 
Dec 2 03:13:10 localhost podman[64231]: 2025-12-02 08:13:10.639300424 +0000 UTC m=+0.060616592 container died a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:49Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-rsyslog, container_name=rsyslog, version=17.1.12, 
tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog) Dec 2 03:13:10 localhost podman[64231]: 2025-12-02 08:13:10.660996732 +0000 UTC m=+0.082312870 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044) Dec 2 03:13:10 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:13:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. 
Dec 2 03:13:10 localhost podman[64260]: 2025-12-02 08:13:10.773742759 +0000 UTC m=+0.081355323 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:13:10 localhost podman[64259]: 2025-12-02 08:13:10.809409112 +0000 UTC m=+0.121180720 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
managed_by=tripleo_ansible, container_name=rsyslog, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, release=1761123044) Dec 2 03:13:10 localhost podman[64259]: rsyslog Dec 2 03:13:10 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. 
Dec 2 03:13:10 localhost podman[64260]: 2025-12-02 08:13:10.858399922 +0000 UTC m=+0.166012476 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, tcib_managed=true, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:13:10 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:13:10 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Dec 2 03:13:10 localhost systemd[1]: Stopped rsyslog container. Dec 2 03:13:10 localhost systemd[1]: Starting rsyslog container... Dec 2 03:13:11 localhost systemd[1]: Started libcrun container. 
Dec 2 03:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:11 localhost podman[64349]: 2025-12-02 08:13:11.116843755 +0000 UTC m=+0.108691116 container init a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, tcib_managed=true, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64) Dec 2 03:13:11 localhost podman[64349]: 2025-12-02 08:13:11.122892942 +0000 UTC m=+0.114740303 container start a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, release=1761123044, config_id=tripleo_step3) Dec 2 03:13:11 localhost podman[64349]: rsyslog Dec 2 03:13:11 localhost systemd[1]: Started rsyslog container. Dec 2 03:13:11 localhost systemd[1]: libpod-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11.scope: Deactivated successfully. 
Dec 2 03:13:11 localhost podman[64385]: 2025-12-02 08:13:11.31295717 +0000 UTC m=+0.053917938 container died a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog) Dec 2 03:13:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11-userdata-shm.mount: Deactivated successfully. Dec 2 03:13:11 localhost systemd[1]: var-lib-containers-storage-overlay-ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336-merged.mount: Deactivated successfully. Dec 2 03:13:11 localhost podman[64385]: 2025-12-02 08:13:11.344415377 +0000 UTC m=+0.085376115 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, container_name=rsyslog, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, release=1761123044) Dec 2 03:13:11 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:13:11 localhost podman[64399]: 2025-12-02 08:13:11.433179032 +0000 UTC m=+0.060230280 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:13:11 localhost podman[64399]: rsyslog Dec 2 03:13:11 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 2 03:13:11 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Dec 2 03:13:11 localhost systemd[1]: Stopped rsyslog container. Dec 2 03:13:11 localhost systemd[1]: Starting rsyslog container... Dec 2 03:13:11 localhost python3[64424]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005541913 step=3 update_config_hash_only=False Dec 2 03:13:11 localhost systemd[1]: Started libcrun container. 
Dec 2 03:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffb2f622809786e8c5fa4ed1e24ed2fc8ffb08acf1d089c8dce2a99369c6a336/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 2 03:13:11 localhost podman[64429]: 2025-12-02 08:13:11.756865933 +0000 UTC m=+0.138787166 container init a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 2 03:13:11 localhost podman[64429]: 2025-12-02 08:13:11.767285631 +0000 UTC m=+0.149206844 container start a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:49:49Z, name=rhosp17/openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 03:13:11 localhost podman[64429]: rsyslog Dec 2 03:13:11 localhost systemd[1]: Started rsyslog container. Dec 2 03:13:11 localhost systemd[1]: libpod-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11.scope: Deactivated successfully. 
Dec 2 03:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:13:11 localhost podman[64463]: 2025-12-02 08:13:11.952269739 +0000 UTC m=+0.075406410 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vendor=Red 
Hat, Inc., com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3) Dec 2 03:13:11 localhost podman[64463]: 2025-12-02 08:13:11.967058406 +0000 UTC m=+0.090195097 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:13:11 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:13:12 localhost podman[64462]: 2025-12-02 08:13:12.030766692 +0000 UTC m=+0.156271448 container died a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, config_id=tripleo_step3, container_name=rsyslog, version=17.1.12, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:13:12 localhost podman[64462]: 2025-12-02 08:13:12.057142718 +0000 UTC m=+0.182647404 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, release=1761123044, build-date=2025-11-18T22:49:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, version=17.1.12, managed_by=tripleo_ansible) Dec 2 03:13:12 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:13:12 localhost podman[64510]: 2025-12-02 08:13:12.134897731 +0000 UTC m=+0.048983541 container cleanup a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 rsyslog, vendor=Red Hat, Inc., container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c70cec5d3310de4d4589e1a95c8fd3c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:49Z, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, vcs-type=git) Dec 2 03:13:12 localhost podman[64510]: rsyslog Dec 2 03:13:12 localhost systemd[1]: tripleo_rsyslog.service: 
Failed with result 'exit-code'. Dec 2 03:13:12 localhost python3[64508]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:13:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a68110bf93a17cc09560fbaa73e0aa62bc7d4bb5fb9957036f7904918efbde11-userdata-shm.mount: Deactivated successfully. Dec 2 03:13:12 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Dec 2 03:13:12 localhost systemd[1]: Stopped rsyslog container. Dec 2 03:13:12 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Dec 2 03:13:12 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 2 03:13:12 localhost systemd[1]: Failed to start rsyslog container. Dec 2 03:13:12 localhost python3[64538]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:13:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:13:23 localhost systemd[1]: tmp-crun.kqf6hy.mount: Deactivated successfully. 
Dec 2 03:13:23 localhost podman[64539]: 2025-12-02 08:13:23.463309581 +0000 UTC m=+0.095085012 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., 
name=rhosp17/openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:13:23 localhost podman[64539]: 2025-12-02 08:13:23.662398557 +0000 UTC m=+0.294173948 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd) Dec 2 03:13:23 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:13:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. 
Dec 2 03:13:41 localhost podman[64569]: 2025-12-02 08:13:41.457029628 +0000 UTC m=+0.097655902 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd) Dec 2 03:13:41 localhost podman[64569]: 2025-12-02 08:13:41.467489177 +0000 UTC m=+0.108115401 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 2 03:13:41 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:13:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:13:42 localhost podman[64590]: 2025-12-02 08:13:42.44223618 +0000 UTC m=+0.085700573 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, container_name=iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:13:42 localhost podman[64590]: 2025-12-02 08:13:42.478397867 +0000 UTC m=+0.121862230 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:13:42 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:13:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:13:54 localhost podman[64610]: 2025-12-02 08:13:54.485301615 +0000 UTC m=+0.098364693 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:13:54 localhost podman[64610]: 2025-12-02 08:13:54.708091274 +0000 UTC m=+0.321154332 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:13:54 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:14:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:14:12 localhost systemd[1]: tmp-crun.f724w8.mount: Deactivated successfully. 
Dec 2 03:14:12 localhost podman[64655]: 2025-12-02 08:14:12.032170658 +0000 UTC m=+0.110135896 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, architecture=x86_64, container_name=collectd, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:14:12 localhost podman[64655]: 2025-12-02 08:14:12.036627961 +0000 UTC m=+0.114593199 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Dec 2 03:14:12 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:14:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:14:13 localhost systemd[1]: tmp-crun.R2t2iE.mount: Deactivated successfully. 
Dec 2 03:14:13 localhost podman[64739]: 2025-12-02 08:14:13.268665695 +0000 UTC m=+0.073392274 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, release=1761123044, architecture=x86_64, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 03:14:13 localhost podman[64739]: 2025-12-02 08:14:13.284160172 +0000 UTC m=+0.088886761 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:14:13 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:14:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:14:25 localhost podman[64758]: 2025-12-02 08:14:25.444963531 +0000 UTC m=+0.089411226 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 2 03:14:25 localhost podman[64758]: 2025-12-02 08:14:25.627008428 +0000 UTC m=+0.271456143 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=) Dec 2 03:14:25 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:14:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:14:42 localhost systemd[1]: tmp-crun.tyBjix.mount: Deactivated successfully. 
Dec 2 03:14:42 localhost podman[64787]: 2025-12-02 08:14:42.443865733 +0000 UTC m=+0.088809812 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, version=17.1.12, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64) Dec 2 03:14:42 localhost podman[64787]: 2025-12-02 08:14:42.479579563 +0000 UTC m=+0.124523652 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-collectd, version=17.1.12, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Dec 2 03:14:42 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:14:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:14:43 localhost podman[64808]: 2025-12-02 08:14:43.443047819 +0000 UTC m=+0.088058201 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:14:43 localhost podman[64808]: 2025-12-02 08:14:43.479972832 +0000 UTC m=+0.124983114 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., container_name=iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4) Dec 2 03:14:43 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:14:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:14:56 localhost podman[64827]: 2025-12-02 08:14:56.419679306 +0000 UTC m=+0.064918930 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 03:14:56 localhost podman[64827]: 2025-12-02 08:14:56.58008218 +0000 UTC m=+0.225321874 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:14:56 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:15:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:15:13 localhost systemd[1]: tmp-crun.zCr1XO.mount: Deactivated successfully. 
Dec 2 03:15:13 localhost podman[64868]: 2025-12-02 08:15:13.471689534 +0000 UTC m=+0.097541294 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:15:13 localhost podman[64868]: 2025-12-02 08:15:13.484013845 +0000 UTC m=+0.109865565 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git) Dec 2 03:15:13 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:15:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:15:13 localhost systemd[1]: tmp-crun.9FjElu.mount: Deactivated successfully. 
Dec 2 03:15:13 localhost podman[64904]: 2025-12-02 08:15:13.621528246 +0000 UTC m=+0.097478672 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, architecture=x86_64, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:15:13 localhost podman[64904]: 2025-12-02 08:15:13.65594064 +0000 UTC m=+0.131891056 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container) Dec 2 03:15:13 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:15:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:15:27 localhost systemd[1]: tmp-crun.uel9My.mount: Deactivated successfully. 
Dec 2 03:15:27 localhost podman[64966]: 2025-12-02 08:15:27.43290632 +0000 UTC m=+0.074497505 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, release=1761123044, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.openshift.expose-services=, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:15:27 localhost podman[64966]: 2025-12-02 08:15:27.668069596 +0000 UTC m=+0.309660791 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:15:27 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:15:44 localhost systemd[1]: tmp-crun.HyTlj4.mount: Deactivated successfully. 
Dec 2 03:15:44 localhost podman[64997]: 2025-12-02 08:15:44.418269882 +0000 UTC m=+0.063818580 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, distribution-scope=public) Dec 2 03:15:44 localhost podman[64997]: 2025-12-02 08:15:44.453965251 +0000 UTC m=+0.099513939 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public) Dec 2 03:15:44 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:15:44 localhost podman[64996]: 2025-12-02 08:15:44.487581182 +0000 UTC m=+0.130493727 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.openshift.expose-services=, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, release=1761123044, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:15:44 localhost podman[64996]: 2025-12-02 08:15:44.497438945 +0000 UTC m=+0.140351440 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:15:44 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:15:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:15:58 localhost podman[65035]: 2025-12-02 08:15:58.437771033 +0000 UTC m=+0.081953761 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:15:58 localhost podman[65035]: 2025-12-02 08:15:58.631008338 +0000 UTC m=+0.275191096 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true) Dec 2 03:15:58 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:16:15 localhost systemd[1]: tmp-crun.YVI28A.mount: Deactivated successfully. 
Dec 2 03:16:15 localhost podman[65065]: 2025-12-02 08:16:15.486219295 +0000 UTC m=+0.129989743 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:16:15 localhost podman[65065]: 2025-12-02 08:16:15.498024402 +0000 UTC m=+0.141794830 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, release=1761123044, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:16:15 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:16:15 localhost podman[65066]: 2025-12-02 08:16:15.488522469 +0000 UTC m=+0.128415990 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, config_id=tripleo_step3, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:16:15 localhost podman[65066]: 2025-12-02 08:16:15.57302525 +0000 UTC m=+0.212918751 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:16:15 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:16:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:16:29 localhost podman[65230]: 2025-12-02 08:16:29.417344107 +0000 UTC m=+0.061795703 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 2 03:16:29 localhost podman[65230]: 2025-12-02 08:16:29.630735009 +0000 UTC m=+0.275186595 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-18T22:49:46Z) Dec 2 03:16:29 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:16:46 localhost podman[65259]: 2025-12-02 08:16:46.435383705 +0000 UTC m=+0.075471542 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, batch=17.1_20251118.1) Dec 2 03:16:46 localhost podman[65259]: 2025-12-02 08:16:46.448061656 +0000 UTC m=+0.088149523 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, container_name=collectd, com.redhat.component=openstack-collectd-container) Dec 2 03:16:46 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:16:46 localhost podman[65260]: 2025-12-02 08:16:46.497130356 +0000 UTC m=+0.132793950 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Dec 2 03:16:46 localhost podman[65260]: 2025-12-02 08:16:46.535288744 +0000 UTC m=+0.170952358 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4) Dec 2 03:16:46 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:17:00 localhost podman[65297]: 2025-12-02 08:17:00.448927441 +0000 UTC m=+0.087634919 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 2 03:17:00 localhost podman[65297]: 2025-12-02 08:17:00.659909228 +0000 UTC m=+0.298616676 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, tcib_managed=true, version=17.1.12, config_id=tripleo_step1, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=) Dec 2 03:17:00 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:17:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:17:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4435 writes, 20K keys, 4435 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4435 writes, 447 syncs, 9.92 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 211 writes, 571 keys, 211 commit groups, 1.0 writes per commit group, ingest: 0.53 MB, 0.00 MB/s#012Interval WAL: 211 writes, 103 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:17:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:17:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 5176 writes, 22K keys, 5176 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5176 writes, 608 syncs, 8.51 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 184 writes, 477 keys, 184 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s#012Interval WAL: 184 writes, 91 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:17:17 localhost systemd[1]: tmp-crun.Oguc3s.mount: Deactivated successfully. 
Dec 2 03:17:17 localhost podman[65326]: 2025-12-02 08:17:17.446217204 +0000 UTC m=+0.090354854 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:17:17 localhost podman[65326]: 2025-12-02 08:17:17.457044854 +0000 UTC m=+0.101182554 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 2 03:17:17 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:17:17 localhost podman[65327]: 2025-12-02 08:17:17.553731903 +0000 UTC m=+0.190889040 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12) Dec 2 03:17:17 localhost podman[65327]: 2025-12-02 08:17:17.587898319 +0000 UTC m=+0.225055456 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 2 03:17:17 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:17:18 localhost python3[65411]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:18 localhost python3[65456]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663437.7772546-107020-12998145733000/source _original_basename=tmpmxtlsuev follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:19 localhost python3[65518]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:20 localhost python3[65561]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663439.6405184-107118-211606194337718/source _original_basename=tmpe9b8mm4p follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:20 localhost python3[65623]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:21 localhost python3[65666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663440.6335638-107174-235607584383599/source _original_basename=tmplxiaaue1 follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:22 localhost python3[65728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:22 localhost python3[65771]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663441.6692474-107229-229965608257019/source _original_basename=tmp15vm1sr1 follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:23 localhost python3[65801]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 03:17:23 localhost systemd[1]: Reloading. Dec 2 03:17:23 localhost systemd-sysv-generator[65829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:23 localhost systemd-rc-local-generator[65822]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:17:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:23 localhost systemd[1]: Reloading. Dec 2 03:17:23 localhost systemd-rc-local-generator[65866]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:23 localhost systemd-sysv-generator[65871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:24 localhost python3[65891]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:17:24 localhost systemd[1]: Reloading. Dec 2 03:17:24 localhost systemd-rc-local-generator[65915]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:24 localhost systemd-sysv-generator[65918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:24 localhost systemd[1]: Reloading. Dec 2 03:17:24 localhost systemd-sysv-generator[65961]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:24 localhost systemd-rc-local-generator[65958]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:24 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Dec 2 03:17:25 localhost python3[65982]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 03:17:25 localhost systemd[1]: Reloading. Dec 2 03:17:25 localhost systemd-rc-local-generator[66033]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:25 localhost systemd-sysv-generator[66037]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:25 localhost systemd[1]: Starting dnf makecache... Dec 2 03:17:25 localhost dnf[66047]: Updating Subscription Management repositories. 
Dec 2 03:17:25 localhost python3[66103]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:26 localhost python3[66160]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663445.6787555-107411-109576048817741/source _original_basename=tmp36v5jdot follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:26 localhost python3[66220]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:17:26 localhost systemd[1]: Reloading. Dec 2 03:17:27 localhost systemd-sysv-generator[66277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:27 localhost systemd-rc-local-generator[66274]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:27 localhost dnf[66047]: Metadata cache refreshed recently. Dec 2 03:17:27 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Dec 2 03:17:27 localhost systemd[1]: Finished dnf makecache. Dec 2 03:17:27 localhost systemd[1]: dnf-makecache.service: Consumed 2.101s CPU time. 
Dec 2 03:17:27 localhost podman[66362]: Dec 2 03:17:27 localhost podman[66362]: 2025-12-02 08:17:27.823851725 +0000 UTC m=+0.059205021 container create bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_heyrovsky, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 03:17:27 localhost systemd[1]: Started libpod-conmon-bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0.scope. Dec 2 03:17:27 localhost systemd[1]: Started libcrun container. 
Dec 2 03:17:27 localhost podman[66362]: 2025-12-02 08:17:27.870895489 +0000 UTC m=+0.106248805 container init bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_heyrovsky, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 03:17:27 localhost podman[66362]: 2025-12-02 08:17:27.881921304 +0000 UTC m=+0.117274580 container start bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_heyrovsky, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, version=7, RELEASE=main, release=1763362218, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) Dec 2 03:17:27 localhost podman[66362]: 2025-12-02 08:17:27.882099329 +0000 UTC m=+0.117452595 container attach bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_heyrovsky, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, io.buildah.version=1.41.4) Dec 2 03:17:27 localhost great_heyrovsky[66377]: 167 167 Dec 2 03:17:27 localhost systemd[1]: libpod-bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0.scope: Deactivated successfully. Dec 2 03:17:27 localhost podman[66362]: 2025-12-02 08:17:27.887785707 +0000 UTC m=+0.123139053 container died bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_heyrovsky, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 03:17:27 localhost podman[66362]: 2025-12-02 08:17:27.806832703 +0000 UTC m=+0.042185989 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 03:17:27 localhost podman[66382]: 2025-12-02 08:17:27.958145346 +0000 UTC m=+0.063950413 container remove bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_heyrovsky, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, RELEASE=main, release=1763362218, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , distribution-scope=public) Dec 2 03:17:27 localhost systemd[1]: libpod-conmon-bfba0677b1a42588249889bb281e5e624ad8b5d18a9ce3d666bb5bfe6a5b75f0.scope: Deactivated successfully. 
Dec 2 03:17:28 localhost podman[66402]: Dec 2 03:17:28 localhost podman[66402]: 2025-12-02 08:17:28.127649713 +0000 UTC m=+0.064316663 container create 615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_taussig, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, ceph=True, architecture=x86_64, GIT_BRANCH=main, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.expose-services=) Dec 2 03:17:28 localhost systemd[1]: Started libpod-conmon-615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c.scope. Dec 2 03:17:28 localhost systemd[1]: Started libcrun container. 
Dec 2 03:17:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c0c7febe39725fab0d61a6143d3683d801789cb15e98b14674d3e7becd398c4/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 03:17:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c0c7febe39725fab0d61a6143d3683d801789cb15e98b14674d3e7becd398c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:17:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c0c7febe39725fab0d61a6143d3683d801789cb15e98b14674d3e7becd398c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 03:17:28 localhost podman[66402]: 2025-12-02 08:17:28.19609991 +0000 UTC m=+0.132766860 container init 615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_taussig, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_BRANCH=main, ceph=True, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 03:17:28 localhost podman[66402]: 2025-12-02 08:17:28.09973315 +0000 UTC m=+0.036400120 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 03:17:28 localhost podman[66402]: 2025-12-02 08:17:28.206324983 +0000 UTC m=+0.142991933 container start 615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_taussig, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=1763362218, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Dec 2 03:17:28 localhost podman[66402]: 2025-12-02 08:17:28.20658972 +0000 UTC m=+0.143256710 container attach 615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_taussig, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph 
Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 03:17:28 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. Dec 2 03:17:28 localhost python3[66440]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:17:28 localhost systemd[1]: var-lib-containers-storage-overlay-af9ff8a7a12f523ef36617097e6108784abba47ec48d635a283b941c3d6d20d8-merged.mount: Deactivated successfully. 
Dec 2 03:17:29 localhost stupefied_taussig[66416]: [ Dec 2 03:17:29 localhost stupefied_taussig[66416]: { Dec 2 03:17:29 localhost stupefied_taussig[66416]: "available": false, Dec 2 03:17:29 localhost stupefied_taussig[66416]: "ceph_device": false, Dec 2 03:17:29 localhost stupefied_taussig[66416]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "lsm_data": {}, Dec 2 03:17:29 localhost stupefied_taussig[66416]: "lvs": [], Dec 2 03:17:29 localhost stupefied_taussig[66416]: "path": "/dev/sr0", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "rejected_reasons": [ Dec 2 03:17:29 localhost stupefied_taussig[66416]: "Insufficient space (<5GB)", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "Has a FileSystem" Dec 2 03:17:29 localhost stupefied_taussig[66416]: ], Dec 2 03:17:29 localhost stupefied_taussig[66416]: "sys_api": { Dec 2 03:17:29 localhost stupefied_taussig[66416]: "actuators": null, Dec 2 03:17:29 localhost stupefied_taussig[66416]: "device_nodes": "sr0", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "human_readable_size": "482.00 KB", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "id_bus": "ata", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "model": "QEMU DVD-ROM", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "nr_requests": "2", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "partitions": {}, Dec 2 03:17:29 localhost stupefied_taussig[66416]: "path": "/dev/sr0", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "removable": "1", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "rev": "2.5+", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "ro": "0", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "rotational": "1", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "sas_address": "", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "sas_device_handle": "", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "scheduler_mode": "mq-deadline", Dec 2 03:17:29 localhost 
stupefied_taussig[66416]: "sectors": 0, Dec 2 03:17:29 localhost stupefied_taussig[66416]: "sectorsize": "2048", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "size": 493568.0, Dec 2 03:17:29 localhost stupefied_taussig[66416]: "support_discard": "0", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "type": "disk", Dec 2 03:17:29 localhost stupefied_taussig[66416]: "vendor": "QEMU" Dec 2 03:17:29 localhost stupefied_taussig[66416]: } Dec 2 03:17:29 localhost stupefied_taussig[66416]: } Dec 2 03:17:29 localhost stupefied_taussig[66416]: ] Dec 2 03:17:29 localhost systemd[1]: libpod-615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c.scope: Deactivated successfully. Dec 2 03:17:29 localhost podman[66402]: 2025-12-02 08:17:29.128519746 +0000 UTC m=+1.065186676 container died 615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_taussig, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.41.4, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:17:29 localhost systemd[1]: var-lib-containers-storage-overlay-5c0c7febe39725fab0d61a6143d3683d801789cb15e98b14674d3e7becd398c4-merged.mount: Deactivated successfully. Dec 2 03:17:29 localhost podman[68053]: 2025-12-02 08:17:29.234135833 +0000 UTC m=+0.092232697 container remove 615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_taussig, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7) Dec 2 03:17:29 localhost systemd[1]: libpod-conmon-615be9ead2c5dcd4403460bc5a56ec910cfcefc1a40bf2568e8f2ae7777b810c.scope: Deactivated successfully. 
Dec 2 03:17:30 localhost ansible-async_wrapper.py[68202]: Invoked with 107953783535 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663449.7532995-107547-160712638006330/AnsiballZ_command.py _ Dec 2 03:17:30 localhost ansible-async_wrapper.py[68205]: Starting module and watcher Dec 2 03:17:30 localhost ansible-async_wrapper.py[68205]: Start watching 68206 (3600) Dec 2 03:17:30 localhost ansible-async_wrapper.py[68206]: Start module (68206) Dec 2 03:17:30 localhost ansible-async_wrapper.py[68202]: Return async_wrapper task started. Dec 2 03:17:30 localhost python3[68226]: ansible-ansible.legacy.async_status Invoked with jid=107953783535.68202 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:17:31 localhost podman[68240]: 2025-12-02 08:17:31.450233068 +0000 UTC m=+0.084532763 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Dec 2 03:17:31 localhost podman[68240]: 2025-12-02 08:17:31.645926131 +0000 UTC m=+0.280225806 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, version=17.1.12, 
com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd) Dec 2 03:17:31 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:17:34 localhost puppet-user[68225]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 2 03:17:34 localhost puppet-user[68225]: (file: /etc/puppet/hiera.yaml) Dec 2 03:17:34 localhost puppet-user[68225]: Warning: Undefined variable '::deploy_config_name'; Dec 2 03:17:34 localhost puppet-user[68225]: (file & line not available) Dec 2 03:17:34 localhost puppet-user[68225]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 2 03:17:34 localhost puppet-user[68225]: (file & line not available) Dec 2 03:17:34 localhost puppet-user[68225]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 2 03:17:34 localhost puppet-user[68225]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 2 03:17:34 localhost puppet-user[68225]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 2 03:17:34 localhost puppet-user[68225]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 2 03:17:34 localhost puppet-user[68225]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 2 03:17:34 localhost puppet-user[68225]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 2 03:17:34 localhost puppet-user[68225]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 2 03:17:34 localhost puppet-user[68225]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 2 03:17:34 localhost puppet-user[68225]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 2 03:17:34 localhost puppet-user[68225]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 2 03:17:34 localhost puppet-user[68225]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 2 03:17:34 localhost puppet-user[68225]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 2 03:17:34 localhost puppet-user[68225]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 2 03:17:34 localhost puppet-user[68225]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 2 03:17:34 localhost puppet-user[68225]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 2 03:17:34 localhost puppet-user[68225]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Dec 2 03:17:34 localhost puppet-user[68225]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Dec 2 03:17:34 localhost puppet-user[68225]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Dec 2 03:17:34 localhost puppet-user[68225]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Dec 2 03:17:34 localhost puppet-user[68225]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.21 seconds Dec 2 03:17:35 localhost ansible-async_wrapper.py[68205]: 68206 still running (3600) Dec 2 03:17:40 localhost ansible-async_wrapper.py[68205]: 68206 still running (3595) Dec 2 03:17:40 localhost python3[68455]: ansible-ansible.legacy.async_status Invoked with jid=107953783535.68202 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:17:41 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 2 03:17:41 localhost systemd[1]: Starting man-db-cache-update.service... Dec 2 03:17:41 localhost systemd[1]: Reloading. Dec 2 03:17:41 localhost systemd-sysv-generator[68535]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 03:17:41 localhost systemd-rc-local-generator[68528]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:41 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 2 03:17:42 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 2 03:17:42 localhost systemd[1]: Finished man-db-cache-update.service. Dec 2 03:17:42 localhost systemd[1]: man-db-cache-update.service: Consumed 1.217s CPU time. Dec 2 03:17:42 localhost systemd[1]: run-rf94fc1652d754b0fbd559e1a9ee4d21e.service: Deactivated successfully. Dec 2 03:17:43 localhost puppet-user[68225]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created Dec 2 03:17:43 localhost puppet-user[68225]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}73f5e19a2a837d39ae500d3423960b5d4f0c8c0d1d962d4648bd104a53eb0bb3' Dec 2 03:17:43 localhost puppet-user[68225]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd' Dec 2 03:17:43 localhost puppet-user[68225]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea' Dec 2 03:17:43 localhost puppet-user[68225]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97' Dec 2 03:17:43 localhost puppet-user[68225]: Notice: 
/Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events Dec 2 03:17:45 localhost ansible-async_wrapper.py[68205]: 68206 still running (3590) Dec 2 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:17:48 localhost puppet-user[68225]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully Dec 2 03:17:48 localhost systemd[1]: tmp-crun.gm6PYz.mount: Deactivated successfully. Dec 2 03:17:48 localhost podman[69559]: 2025-12-02 08:17:48.456410944 +0000 UTC m=+0.086378403 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 2 03:17:48 localhost systemd[1]: Reloading. 
Dec 2 03:17:48 localhost podman[69559]: 2025-12-02 08:17:48.492775672 +0000 UTC m=+0.122743171 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3) Dec 2 03:17:48 localhost podman[69558]: 2025-12-02 08:17:48.509339161 +0000 UTC m=+0.139351681 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z) Dec 2 03:17:48 localhost podman[69558]: 2025-12-02 08:17:48.52086531 +0000 UTC m=+0.150877810 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=collectd, vcs-type=git, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, release=1761123044) Dec 2 03:17:48 localhost systemd-sysv-generator[69627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:48 localhost systemd-rc-local-generator[69620]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:48 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:17:48 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:17:48 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Dec 2 03:17:48 localhost snmpd[69635]: Can't find directory of RPM packages Dec 2 03:17:48 localhost snmpd[69635]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB Dec 2 03:17:48 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon.. Dec 2 03:17:48 localhost systemd[1]: Reloading. Dec 2 03:17:49 localhost systemd-rc-local-generator[69658]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:49 localhost systemd-sysv-generator[69661]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:49 localhost systemd[1]: Reloading. Dec 2 03:17:49 localhost systemd-rc-local-generator[69695]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:17:49 localhost systemd-sysv-generator[69700]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:17:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:17:49 localhost puppet-user[68225]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Dec 2 03:17:49 localhost puppet-user[68225]: Notice: Applied catalog in 15.20 seconds Dec 2 03:17:49 localhost puppet-user[68225]: Application: Dec 2 03:17:49 localhost puppet-user[68225]: Initial environment: production Dec 2 03:17:49 localhost puppet-user[68225]: Converged environment: production Dec 2 03:17:49 localhost puppet-user[68225]: Run mode: user Dec 2 03:17:49 localhost puppet-user[68225]: Changes: Dec 2 03:17:49 localhost puppet-user[68225]: Total: 8 Dec 2 03:17:49 localhost puppet-user[68225]: Events: Dec 2 03:17:49 localhost puppet-user[68225]: Success: 8 Dec 2 03:17:49 localhost puppet-user[68225]: Total: 8 Dec 2 03:17:49 localhost puppet-user[68225]: Resources: Dec 2 03:17:49 localhost puppet-user[68225]: Restarted: 1 Dec 2 03:17:49 localhost puppet-user[68225]: Changed: 8 Dec 2 03:17:49 localhost puppet-user[68225]: Out of sync: 8 Dec 2 03:17:49 localhost puppet-user[68225]: Total: 19 Dec 
2 03:17:49 localhost puppet-user[68225]: Time: Dec 2 03:17:49 localhost puppet-user[68225]: Filebucket: 0.00 Dec 2 03:17:49 localhost puppet-user[68225]: Schedule: 0.00 Dec 2 03:17:49 localhost puppet-user[68225]: Augeas: 0.01 Dec 2 03:17:49 localhost puppet-user[68225]: File: 0.06 Dec 2 03:17:49 localhost puppet-user[68225]: Config retrieval: 0.27 Dec 2 03:17:49 localhost puppet-user[68225]: Service: 1.19 Dec 2 03:17:49 localhost puppet-user[68225]: Transaction evaluation: 15.19 Dec 2 03:17:49 localhost puppet-user[68225]: Catalog application: 15.20 Dec 2 03:17:49 localhost puppet-user[68225]: Last run: 1764663469 Dec 2 03:17:49 localhost puppet-user[68225]: Exec: 5.05 Dec 2 03:17:49 localhost puppet-user[68225]: Package: 8.75 Dec 2 03:17:49 localhost puppet-user[68225]: Total: 15.21 Dec 2 03:17:49 localhost puppet-user[68225]: Version: Dec 2 03:17:49 localhost puppet-user[68225]: Config: 1764663454 Dec 2 03:17:49 localhost puppet-user[68225]: Puppet: 7.10.0 Dec 2 03:17:49 localhost ansible-async_wrapper.py[68206]: Module complete (68206) Dec 2 03:17:50 localhost ansible-async_wrapper.py[68205]: Done in kid B. 
Dec 2 03:17:51 localhost python3[69722]: ansible-ansible.legacy.async_status Invoked with jid=107953783535.68202 mode=status _async_dir=/tmp/.ansible_async Dec 2 03:17:51 localhost python3[69738]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:17:52 localhost python3[69754]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:17:52 localhost python3[69804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:53 localhost python3[69822]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp92itetqj recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 2 03:17:53 localhost python3[69852]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 2 03:17:54 localhost python3[69955]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 2 03:17:55 localhost python3[69974]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:56 localhost python3[70006]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:17:56 localhost python3[70056]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:57 localhost python3[70074]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:57 localhost python3[70136]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:57 localhost python3[70154]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:58 localhost python3[70216]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:58 localhost python3[70234]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:17:59 localhost python3[70296]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:17:59 localhost python3[70314]: ansible-ansible.legacy.file Invoked with mode=0644 
owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:00 localhost python3[70344]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:00 localhost systemd[1]: Reloading. Dec 2 03:18:00 localhost systemd-sysv-generator[70372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:00 localhost systemd-rc-local-generator[70369]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 03:18:00 localhost python3[70430]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:18:01 localhost python3[70448]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:01 localhost python3[70510]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:18:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:18:01 localhost systemd[1]: tmp-crun.qtjLhd.mount: Deactivated successfully. 
Dec 2 03:18:01 localhost podman[70528]: 2025-12-02 08:18:01.959517567 +0000 UTC m=+0.092621608 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:18:02 localhost python3[70529]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:02 localhost podman[70528]: 2025-12-02 08:18:02.140150762 +0000 UTC m=+0.273254823 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:18:02 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:18:02 localhost python3[70588]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:02 localhost systemd[1]: Reloading. Dec 2 03:18:02 localhost systemd-rc-local-generator[70610]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:02 localhost systemd-sysv-generator[70613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:02 localhost systemd[1]: Starting Create netns directory... Dec 2 03:18:02 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:18:02 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:18:03 localhost systemd[1]: Finished Create netns directory. 
Dec 2 03:18:03 localhost python3[70644]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:18:05 localhost python3[70702]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:18:05 localhost podman[70877]: 2025-12-02 08:18:05.939162467 +0000 UTC m=+0.070232637 container create 656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, container_name=configure_cms_options, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:18:05 localhost podman[70906]: 2025-12-02 08:18:05.977773617 +0000 UTC m=+0.086209290 container create 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, 
io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:18:06 localhost podman[70876]: 2025-12-02 08:18:05.899856528 +0000 UTC m=+0.031661258 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Dec 2 03:18:06 localhost podman[70881]: 2025-12-02 08:18:06.003409717 +0000 UTC m=+0.127751441 container create 5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_libvirt_init_secret, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0.scope. Dec 2 03:18:06 localhost podman[70877]: 2025-12-02 08:18:05.906581994 +0000 UTC m=+0.037652154 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.scope. Dec 2 03:18:06 localhost podman[70881]: 2025-12-02 08:18:05.911687355 +0000 UTC m=+0.036029149 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. 
Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fddcd6dd4df186203ff55efce1dca7750680c9de7878dc7d77dfefe109af9b62/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost podman[70906]: 2025-12-02 08:18:05.929438808 +0000 UTC m=+0.037874471 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe.scope. Dec 2 03:18:06 localhost podman[70878]: 2025-12-02 08:18:06.050397909 +0000 UTC m=+0.177622833 container create 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, summary=Red 
Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true) Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. 
Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b94aabaeee0e41c77050836c47e281aafe3b0b49cec59de508354f5d2967adc/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b94aabaeee0e41c77050836c47e281aafe3b0b49cec59de508354f5d2967adc/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b94aabaeee0e41c77050836c47e281aafe3b0b49cec59de508354f5d2967adc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:18:06 localhost podman[70906]: 2025-12-02 08:18:06.06485581 +0000 UTC m=+0.173291503 container init 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:18:06 localhost podman[70881]: 2025-12-02 08:18:06.068886001 +0000 UTC m=+0.193227755 container init 5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_libvirt_init_secret, distribution-scope=public, url=https://www.redhat.com, 
vcs-type=git, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, batch=17.1_20251118.1, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:18:06 localhost podman[70881]: 2025-12-02 08:18:06.076097141 +0000 UTC m=+0.200438865 container start 5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:18:06 localhost podman[70881]: 2025-12-02 08:18:06.076233485 +0000 UTC m=+0.200575269 container attach 5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, batch=17.1_20251118.1, container_name=nova_libvirt_init_secret, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, architecture=x86_64) Dec 2 03:18:06 localhost podman[70877]: 2025-12-02 08:18:06.080391621 +0000 UTC m=+0.211461801 container init 656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, release=1761123044, architecture=x86_64, 
distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.scope. Dec 2 03:18:06 localhost podman[70876]: 2025-12-02 08:18:06.088785403 +0000 UTC m=+0.220590103 container create 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:18:06 localhost podman[70906]: 2025-12-02 08:18:06.104794466 +0000 UTC m=+0.213230129 container start 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 2 03:18:06 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=72848ce4d815e5b4e89ff3e01c5f9f7e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. Dec 2 03:18:06 localhost podman[70878]: 2025-12-02 08:18:06.014106004 +0000 UTC m=+0.141330978 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.scope. Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5dc9262725001f2f73a799452ce705d444359a7e34fc5a93c05c8a39696c355/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. 
Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a416db81901f96d6fd72f5969e70208d019cecbe75cef9d1ed7630b319da67/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost podman[70877]: 2025-12-02 08:18:06.140256449 +0000 UTC m=+0.271326609 container start 656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, version=17.1.12, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:18:06 localhost podman[70877]: 2025-12-02 08:18:06.140955758 +0000 UTC m=+0.272025938 container attach 656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, container_name=configure_cms_options, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. 
Dec 2 03:18:06 localhost podman[70878]: 2025-12-02 08:18:06.154057982 +0000 UTC m=+0.281282896 container init 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:18:06 localhost podman[70878]: 2025-12-02 08:18:06.184134425 +0000 UTC m=+0.311359339 container start 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, distribution-scope=public, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:18:06 localhost podman[70876]: 2025-12-02 08:18:06.18866579 +0000 UTC m=+0.320470510 container init 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:18:06 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 2 03:18:06 localhost ovs-vsctl[71035]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:18:06 localhost systemd[1]: libpod-656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0.scope: Deactivated successfully. 
Dec 2 03:18:06 localhost podman[70876]: 2025-12-02 08:18:06.229973545 +0000 UTC m=+0.361778245 container start 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:18:06 localhost systemd[1]: libpod-5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe.scope: Deactivated successfully. Dec 2 03:18:06 localhost podman[70877]: 2025-12-02 08:18:06.230747246 +0000 UTC m=+0.361817426 container died 656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container) Dec 2 03:18:06 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=72848ce4d815e5b4e89ff3e01c5f9f7e --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute 
--label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro 
--volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Dec 2 03:18:06 localhost podman[70969]: 2025-12-02 08:18:06.253751623 +0000 UTC m=+0.150633254 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:18:06 localhost podman[70881]: 2025-12-02 08:18:06.284943009 +0000 UTC m=+0.409284753 container died 5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, container_name=nova_libvirt_init_secret, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 03:18:06 localhost podman[71044]: 2025-12-02 08:18:06.385547865 +0000 UTC m=+0.148211117 container cleanup 656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, 
batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, container_name=configure_cms_options, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:18:06 localhost systemd[1]: libpod-conmon-656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0.scope: Deactivated successfully. Dec 2 03:18:06 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi Dec 2 03:18:06 localhost podman[70969]: 2025-12-02 08:18:06.398519585 +0000 UTC m=+0.295401226 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, 
name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:18:06 localhost podman[70969]: unhealthy Dec 2 03:18:06 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:18:06 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed with result 'exit-code'. 
Dec 2 03:18:06 localhost podman[71057]: 2025-12-02 08:18:06.437149526 +0000 UTC m=+0.190865980 container cleanup 5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:18:06 localhost systemd[1]: libpod-conmon-5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe.scope: Deactivated successfully. Dec 2 03:18:06 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Dec 2 03:18:06 localhost podman[71015]: 2025-12-02 08:18:06.370293273 +0000 UTC m=+0.181361457 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:18:06 localhost podman[71039]: 2025-12-02 08:18:06.520575707 +0000 UTC m=+0.285340897 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, vcs-type=git) Dec 2 03:18:06 localhost podman[71039]: 2025-12-02 08:18:06.533829134 +0000 UTC m=+0.298594334 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Dec 2 03:18:06 localhost podman[71039]: unhealthy Dec 2 03:18:06 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:18:06 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Failed with result 'exit-code'. 
Dec 2 03:18:06 localhost podman[71015]: 2025-12-02 08:18:06.554478987 +0000 UTC m=+0.365547161 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z) Dec 2 03:18:06 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:18:06 localhost podman[71239]: 2025-12-02 08:18:06.666703646 +0000 UTC m=+0.072111359 container create 8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, version=17.1.12, container_name=setup_ovs_manager, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, 
io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e.scope. Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. 
Dec 2 03:18:06 localhost podman[71239]: 2025-12-02 08:18:06.623831539 +0000 UTC m=+0.029239282 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 2 03:18:06 localhost podman[71239]: 2025-12-02 08:18:06.728886899 +0000 UTC m=+0.134294622 container init 8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=setup_ovs_manager, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:18:06 localhost podman[71239]: 2025-12-02 08:18:06.73686712 +0000 UTC m=+0.142274823 container start 8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:14:25Z, container_name=setup_ovs_manager, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:18:06 localhost podman[71239]: 2025-12-02 08:18:06.736999764 +0000 UTC m=+0.142407467 container attach 8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, container_name=setup_ovs_manager, 
config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 2 03:18:06 localhost podman[71265]: 2025-12-02 08:18:06.76212194 +0000 UTC m=+0.070352540 container create 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12) Dec 2 03:18:06 localhost systemd[1]: Started libpod-conmon-17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.scope. Dec 2 03:18:06 localhost systemd[1]: Started libcrun container. Dec 2 03:18:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed02a8eef27d7fad5076c16a3501516599cfd6963ae4f4d75e8f0b164242bc5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:06 localhost podman[71265]: 2025-12-02 08:18:06.719252283 +0000 UTC m=+0.027482883 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 03:18:06 localhost podman[71265]: 2025-12-02 08:18:06.848066332 +0000 UTC m=+0.156296922 container init 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:18:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:18:06 localhost podman[71265]: 2025-12-02 08:18:06.891761052 +0000 UTC m=+0.199991682 container start 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4) Dec 2 03:18:06 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:18:06 
localhost systemd[1]: var-lib-containers-storage-overlay-0b94aabaeee0e41c77050836c47e281aafe3b0b49cec59de508354f5d2967adc-merged.mount: Deactivated successfully. Dec 2 03:18:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5417040510e8bae38145b491ecca5f879da783fbeaa8939a882be5d9417513fe-userdata-shm.mount: Deactivated successfully. Dec 2 03:18:06 localhost systemd[1]: var-lib-containers-storage-overlay-a7c14a8989e4d415fd166e88f713e89b4166ed5e691e2325b8968269ca1a9aa5-merged.mount: Deactivated successfully. Dec 2 03:18:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-656b9b116d635a442ed314ff9c3944ec3c499c0f90954d885e293495308ceba0-userdata-shm.mount: Deactivated successfully. Dec 2 03:18:06 localhost podman[71296]: 2025-12-02 08:18:06.964688773 +0000 UTC m=+0.070168266 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:18:07 localhost podman[71296]: 2025-12-02 08:18:07.125967632 +0000 UTC m=+0.231447145 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:18:07 localhost podman[71296]: unhealthy Dec 2 03:18:07 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:18:07 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Failed with result 'exit-code'. Dec 2 03:18:07 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Dec 2 03:18:09 localhost ovs-vsctl[71471]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Dec 2 03:18:09 localhost systemd[1]: libpod-8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e.scope: Deactivated successfully. Dec 2 03:18:09 localhost systemd[1]: libpod-8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e.scope: Consumed 2.929s CPU time. 
Dec 2 03:18:09 localhost podman[71239]: 2025-12-02 08:18:09.697208718 +0000 UTC m=+3.102616531 container died 8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, container_name=setup_ovs_manager, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:18:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e-userdata-shm.mount: Deactivated successfully. Dec 2 03:18:09 localhost systemd[1]: var-lib-containers-storage-overlay-a423cc2ecc4b4a7a413eebe91da3e0f5986adaa9cffa0acabc604ad76a95339a-merged.mount: Deactivated successfully. 
Dec 2 03:18:09 localhost podman[71473]: 2025-12-02 08:18:09.797402813 +0000 UTC m=+0.082920618 container cleanup 8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, container_name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:18:09 localhost systemd[1]: libpod-conmon-8b8f203b1106160e8add055392f6e9894b323cfd37e74c53036d6af648edb28e.scope: Deactivated successfully. Dec 2 03:18:09 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Dec 2 03:18:10 localhost podman[71586]: 2025-12-02 08:18:10.203044113 +0000 UTC m=+0.065827694 container create 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, distribution-scope=public, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com) Dec 2 03:18:10 localhost podman[71585]: 2025-12-02 08:18:10.234405792 +0000 UTC m=+0.098581502 container create e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1) Dec 2 03:18:10 localhost systemd[1]: Started libpod-conmon-1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.scope. Dec 2 03:18:10 localhost systemd[1]: Started libcrun container. Dec 2 03:18:10 localhost systemd[1]: Started libpod-conmon-e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.scope. Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a1af3edb87ae84c24194878020e22370aba8355c75888d8a0972cd3b1ac86c8/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a1af3edb87ae84c24194878020e22370aba8355c75888d8a0972cd3b1ac86c8/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a1af3edb87ae84c24194878020e22370aba8355c75888d8a0972cd3b1ac86c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost podman[71586]: 2025-12-02 08:18:10.165865323 +0000 UTC m=+0.028648924 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 2 03:18:10 localhost podman[71585]: 2025-12-02 08:18:10.178285007 +0000 UTC m=+0.042460787 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:18:10 
localhost systemd[1]: Started libcrun container. Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2735d70b4229c33d88157dc663cc996128839f7744195fee819ab923e68e6b/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2735d70b4229c33d88157dc663cc996128839f7744195fee819ab923e68e6b/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2735d70b4229c33d88157dc663cc996128839f7744195fee819ab923e68e6b/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Dec 2 03:18:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:18:10 localhost podman[71586]: 2025-12-02 08:18:10.300503264 +0000 UTC m=+0.163286835 container init 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:18:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:18:10 localhost podman[71585]: 2025-12-02 08:18:10.319715866 +0000 UTC m=+0.183891586 container init e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, 
vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:18:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:18:10 localhost podman[71586]: 2025-12-02 08:18:10.344444862 +0000 UTC m=+0.207228453 container start 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Dec 2 03:18:10 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d1544001d5773d0045aaf61439ef5e02 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume 
/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 2 03:18:10 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:18:10 localhost systemd[1]: Created slice User Slice of UID 0. Dec 2 03:18:10 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 2 03:18:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:18:10 localhost podman[71585]: 2025-12-02 08:18:10.388538043 +0000 UTC m=+0.252713753 container start e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2025-11-18T23:34:05Z, vcs-type=git, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:18:10 localhost python3[70702]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 2 03:18:10 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 2 03:18:10 localhost systemd[1]: Starting User Manager for UID 0... 
Dec 2 03:18:10 localhost podman[71628]: 2025-12-02 08:18:10.448808293 +0000 UTC m=+0.100900066 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, io.openshift.expose-services=, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, 
release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, container_name=ovn_metadata_agent, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:18:10 localhost podman[71647]: 2025-12-02 08:18:10.533736336 +0000 UTC m=+0.143654001 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:18:10 localhost systemd[71655]: Queued start job for default target Main User Target. 
Dec 2 03:18:10 localhost podman[71628]: 2025-12-02 08:18:10.562260006 +0000 UTC m=+0.214351669 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 2 03:18:10 localhost systemd[71655]: Created slice User Application Slice. Dec 2 03:18:10 localhost systemd[71655]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 2 03:18:10 localhost systemd[71655]: Started Daily Cleanup of User's Temporary Directories. Dec 2 03:18:10 localhost systemd[71655]: Reached target Paths. Dec 2 03:18:10 localhost systemd[71655]: Reached target Timers. Dec 2 03:18:10 localhost systemd[71655]: Starting D-Bus User Message Bus Socket... Dec 2 03:18:10 localhost systemd[71655]: Starting Create User's Volatile Files and Directories... 
Dec 2 03:18:10 localhost podman[71647]: 2025-12-02 08:18:10.572965973 +0000 UTC m=+0.182883648 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., 
com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ovn-controller) Dec 2 03:18:10 localhost podman[71628]: unhealthy Dec 2 03:18:10 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:18:10 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:18:10 localhost podman[71647]: unhealthy Dec 2 03:18:10 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:18:10 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:18:10 localhost systemd[71655]: Listening on D-Bus User Message Bus Socket. Dec 2 03:18:10 localhost systemd[71655]: Reached target Sockets. Dec 2 03:18:10 localhost systemd[71655]: Finished Create User's Volatile Files and Directories. Dec 2 03:18:10 localhost systemd[71655]: Reached target Basic System. Dec 2 03:18:10 localhost systemd[71655]: Reached target Main User Target. Dec 2 03:18:10 localhost systemd[71655]: Startup finished in 148ms. Dec 2 03:18:10 localhost systemd[1]: Started User Manager for UID 0. Dec 2 03:18:10 localhost systemd[1]: Started Session c9 of User root. Dec 2 03:18:10 localhost systemd[1]: session-c9.scope: Deactivated successfully. Dec 2 03:18:10 localhost kernel: device br-int entered promiscuous mode Dec 2 03:18:10 localhost NetworkManager[5965]: [1764663490.6957] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Dec 2 03:18:10 localhost systemd-udevd[71739]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 03:18:10 localhost kernel: device genev_sys_6081 entered promiscuous mode Dec 2 03:18:10 localhost NetworkManager[5965]: [1764663490.7337] device (genev_sys_6081): carrier: link connected Dec 2 03:18:10 localhost NetworkManager[5965]: [1764663490.7343] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Dec 2 03:18:11 localhost python3[71761]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:11 localhost python3[71777]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:11 localhost python3[71793]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:11 localhost python3[71809]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:12 localhost python3[71827]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:12 localhost python3[71845]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:12 localhost python3[71861]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:18:12 localhost python3[71879]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:18:13 localhost python3[71897]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:18:13 localhost python3[71913]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False 
get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:18:13 localhost python3[71929]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:18:13 localhost python3[71945]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 2 03:18:14 localhost python3[72006]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.054966-108778-16394698894446/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:15 localhost python3[72035]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.054966-108778-16394698894446/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:15 localhost python3[72064]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.054966-108778-16394698894446/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:16 localhost python3[72093]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.054966-108778-16394698894446/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:16 localhost python3[72122]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.054966-108778-16394698894446/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:17 localhost python3[72151]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.054966-108778-16394698894446/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:17 localhost python3[72167]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 03:18:17 localhost systemd[1]: Reloading. 
Dec 2 03:18:17 localhost systemd-sysv-generator[72194]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:17 localhost systemd-rc-local-generator[72191]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:18 localhost python3[72219]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:18:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:18:18 localhost systemd[1]: Reloading. 
Dec 2 03:18:18 localhost podman[72222]: 2025-12-02 08:18:18.962940798 +0000 UTC m=+0.088199944 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:18:19 localhost systemd-sysv-generator[72271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:19 localhost systemd-rc-local-generator[72268]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 03:18:19 localhost podman[72221]: 2025-12-02 08:18:19.044722984 +0000 UTC m=+0.168636773 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, build-date=2025-11-18T22:51:28Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Dec 2 03:18:19 localhost podman[72222]: 2025-12-02 08:18:19.049695502 +0000 UTC m=+0.174954698 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:18:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 03:18:19 localhost podman[72221]: 2025-12-02 08:18:19.104242314 +0000 UTC m=+0.228156103 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, distribution-scope=public, container_name=collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc.) Dec 2 03:18:19 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:18:19 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:18:19 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 2 03:18:19 localhost tripleo-start-podman-container[72295]: Creating additional drop-in dependency for "ceilometer_agent_compute" (4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be) Dec 2 03:18:19 localhost systemd[1]: Reloading. Dec 2 03:18:19 localhost systemd-sysv-generator[72358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:19 localhost systemd-rc-local-generator[72355]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 2 03:18:19 localhost systemd[1]: Started ceilometer_agent_compute container. Dec 2 03:18:20 localhost python3[72380]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:20 localhost systemd[1]: Reloading. Dec 2 03:18:20 localhost systemd-sysv-generator[72409]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:20 localhost systemd-rc-local-generator[72406]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:20 localhost systemd[1]: Stopping User Manager for UID 0... Dec 2 03:18:20 localhost systemd[71655]: Activating special unit Exit the Session... Dec 2 03:18:20 localhost systemd[71655]: Stopped target Main User Target. Dec 2 03:18:20 localhost systemd[71655]: Stopped target Basic System. Dec 2 03:18:20 localhost systemd[71655]: Stopped target Paths. Dec 2 03:18:20 localhost systemd[71655]: Stopped target Sockets. Dec 2 03:18:20 localhost systemd[71655]: Stopped target Timers. Dec 2 03:18:20 localhost systemd[71655]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 03:18:20 localhost systemd[71655]: Closed D-Bus User Message Bus Socket. Dec 2 03:18:20 localhost systemd[71655]: Stopped Create User's Volatile Files and Directories. Dec 2 03:18:20 localhost systemd[71655]: Removed slice User Application Slice. Dec 2 03:18:20 localhost systemd[71655]: Reached target Shutdown. Dec 2 03:18:20 localhost systemd[71655]: Finished Exit the Session. 
Dec 2 03:18:20 localhost systemd[71655]: Reached target Exit the Session. Dec 2 03:18:20 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Dec 2 03:18:20 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 2 03:18:20 localhost systemd[1]: Stopped User Manager for UID 0. Dec 2 03:18:20 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 2 03:18:20 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 2 03:18:20 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 2 03:18:20 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 2 03:18:20 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 2 03:18:20 localhost systemd[1]: Started ceilometer_agent_ipmi container. Dec 2 03:18:21 localhost python3[72451]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:21 localhost systemd[1]: Reloading. Dec 2 03:18:21 localhost systemd-rc-local-generator[72474]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:21 localhost systemd-sysv-generator[72480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:21 localhost systemd[1]: Starting logrotate_crond container... Dec 2 03:18:21 localhost systemd[1]: Started logrotate_crond container. 
Dec 2 03:18:22 localhost python3[72517]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:23 localhost systemd[1]: Reloading. Dec 2 03:18:23 localhost systemd-rc-local-generator[72543]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:23 localhost systemd-sysv-generator[72547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:23 localhost systemd[1]: Starting nova_migration_target container... Dec 2 03:18:23 localhost systemd[1]: Started nova_migration_target container. Dec 2 03:18:24 localhost python3[72585]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:24 localhost systemd[1]: Reloading. Dec 2 03:18:24 localhost systemd-sysv-generator[72617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:24 localhost systemd-rc-local-generator[72612]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:25 localhost systemd[1]: Starting ovn_controller container... 
Dec 2 03:18:25 localhost tripleo-start-podman-container[72625]: Creating additional drop-in dependency for "ovn_controller" (e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b) Dec 2 03:18:25 localhost systemd[1]: Reloading. Dec 2 03:18:25 localhost systemd-sysv-generator[72682]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:25 localhost systemd-rc-local-generator[72677]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:25 localhost systemd[1]: Started ovn_controller container. Dec 2 03:18:26 localhost python3[72709]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:18:26 localhost systemd[1]: Reloading. Dec 2 03:18:26 localhost systemd-rc-local-generator[72734]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:18:26 localhost systemd-sysv-generator[72739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:18:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:18:26 localhost systemd[1]: Starting ovn_metadata_agent container... Dec 2 03:18:26 localhost systemd[1]: Started ovn_metadata_agent container. 
Dec 2 03:18:27 localhost python3[72791]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:29 localhost python3[72913]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005541913 step=4 update_config_hash_only=False Dec 2 03:18:29 localhost python3[72929]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:18:30 localhost python3[72945]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:18:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:18:32 localhost podman[73022]: 2025-12-02 08:18:32.428117402 +0000 UTC m=+0.078308111 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr) Dec 2 03:18:32 localhost podman[73022]: 2025-12-02 08:18:32.62040788 +0000 UTC m=+0.270598569 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1) Dec 2 03:18:32 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:18:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:18:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:18:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:18:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:18:37 localhost podman[73052]: 2025-12-02 08:18:37.450229547 +0000 UTC m=+0.091252029 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
com.redhat.component=openstack-cron-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=logrotate_crond, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:18:37 localhost podman[73052]: 2025-12-02 08:18:37.458570928 +0000 UTC m=+0.099593480 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Dec 2 03:18:37 localhost podman[73053]: 2025-12-02 08:18:37.496646014 +0000 UTC m=+0.137832290 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:18:37 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:18:37 localhost systemd[1]: tmp-crun.GVqAEm.mount: Deactivated successfully. Dec 2 03:18:37 localhost podman[73054]: 2025-12-02 08:18:37.578574844 +0000 UTC m=+0.212296063 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.expose-services=) Dec 2 03:18:37 localhost podman[73054]: 2025-12-02 08:18:37.60801168 +0000 UTC m=+0.241732899 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, tcib_managed=true, vcs-type=git, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:18:37 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:18:37 localhost podman[73062]: 2025-12-02 08:18:37.662682424 +0000 UTC m=+0.292810904 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:18:37 localhost podman[73062]: 2025-12-02 08:18:37.711547239 +0000 UTC m=+0.341675729 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:18:37 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:18:37 localhost podman[73053]: 2025-12-02 08:18:37.929214309 +0000 UTC m=+0.570400505 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, io.buildah.version=1.41.4) Dec 2 03:18:37 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:18:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:18:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:18:41 localhost podman[73146]: 2025-12-02 08:18:41.441279286 +0000 UTC m=+0.079720750 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 
17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:18:41 localhost systemd[1]: tmp-crun.gL0p5M.mount: Deactivated successfully. Dec 2 03:18:41 localhost podman[73145]: 2025-12-02 08:18:41.49197417 +0000 UTC m=+0.130551967 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, config_id=tripleo_step4) Dec 2 03:18:41 localhost podman[73146]: 2025-12-02 08:18:41.517510688 +0000 UTC m=+0.155952182 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com) Dec 2 03:18:41 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:18:41 localhost podman[73145]: 2025-12-02 08:18:41.550982115 +0000 UTC m=+0.189559892 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true) Dec 2 03:18:41 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:18:48 localhost snmpd[69635]: empty variable list in _query Dec 2 03:18:48 localhost snmpd[69635]: empty variable list in _query Dec 2 03:18:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:18:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:18:49 localhost podman[73194]: 2025-12-02 08:18:49.432268316 +0000 UTC m=+0.074094205 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, container_name=iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container) Dec 2 03:18:49 localhost podman[73194]: 2025-12-02 08:18:49.446325735 +0000 UTC m=+0.088151634 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 2 03:18:49 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:18:49 localhost podman[73193]: 2025-12-02 08:18:49.48369709 +0000 UTC m=+0.124293524 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, maintainer=OpenStack TripleO 
Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, architecture=x86_64) Dec 2 03:18:49 localhost podman[73193]: 2025-12-02 08:18:49.490598391 +0000 UTC m=+0.131194895 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3) Dec 2 03:18:49 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:19:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:19:03 localhost systemd[1]: tmp-crun.0OCfyl.mount: Deactivated successfully. 
Dec 2 03:19:03 localhost podman[73232]: 2025-12-02 08:19:03.439091604 +0000 UTC m=+0.084254282 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, 
io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:19:03 localhost podman[73232]: 2025-12-02 08:19:03.623937287 +0000 UTC m=+0.269099945 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:19:03 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:19:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:19:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:19:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:19:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:19:08 localhost podman[73263]: 2025-12-02 08:19:08.450898646 +0000 UTC m=+0.085357293 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1) Dec 2 03:19:08 localhost podman[73262]: 2025-12-02 08:19:08.498356849 +0000 UTC m=+0.135398007 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Dec 2 03:19:08 localhost podman[73265]: 2025-12-02 08:19:08.552291681 +0000 UTC m=+0.181447471 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com) Dec 2 03:19:08 localhost podman[73265]: 2025-12-02 08:19:08.587958067 +0000 UTC m=+0.217113787 container exec_died 
7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 2 03:19:08 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:19:08 localhost podman[73264]: 2025-12-02 08:19:08.613657998 +0000 UTC m=+0.243679002 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:19:08 localhost podman[73262]: 2025-12-02 08:19:08.635109191 +0000 UTC m=+0.272150429 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, container_name=logrotate_crond, name=rhosp17/openstack-cron, batch=17.1_20251118.1, distribution-scope=public) Dec 2 03:19:08 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: 
Deactivated successfully. Dec 2 03:19:08 localhost podman[73264]: 2025-12-02 08:19:08.666290094 +0000 UTC m=+0.296311128 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:19:08 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:19:08 localhost podman[73263]: 2025-12-02 08:19:08.822907356 +0000 UTC m=+0.457365953 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, 
batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 2 03:19:08 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:19:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:19:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:19:12 localhost podman[73357]: 2025-12-02 08:19:12.477242417 +0000 UTC m=+0.068349192 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:19:12 localhost podman[73358]: 2025-12-02 08:19:12.486736379 +0000 UTC m=+0.076340002 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vcs-type=git, container_name=ovn_controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:19:12 localhost podman[73358]: 2025-12-02 08:19:12.509652144 +0000 UTC m=+0.099255777 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, version=17.1.12, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:19:12 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:19:12 localhost podman[73357]: 2025-12-02 08:19:12.545189406 +0000 UTC m=+0.136296161 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:19:12 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:19:20 localhost systemd[1]: tmp-crun.J6tsXh.mount: Deactivated successfully. 
Dec 2 03:19:20 localhost podman[73406]: 2025-12-02 08:19:20.451008955 +0000 UTC m=+0.084562850 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 
iscsid, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:19:20 localhost podman[73406]: 2025-12-02 08:19:20.466037361 +0000 UTC m=+0.099591316 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=) Dec 2 03:19:20 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:19:20 localhost podman[73405]: 2025-12-02 08:19:20.555881297 +0000 UTC m=+0.188622440 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:19:20 localhost podman[73405]: 2025-12-02 08:19:20.596261133 +0000 UTC m=+0.229002286 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T22:51:28Z, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:19:20 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:19:33 localhost podman[73546]: 2025-12-02 08:19:33.46885287 +0000 UTC m=+0.090064903 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , release=1763362218, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.openshift.expose-services=) Dec 2 03:19:33 localhost podman[73546]: 2025-12-02 08:19:33.574247636 +0000 UTC m=+0.195459699 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, release=1763362218) Dec 2 03:19:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:19:33 localhost podman[73593]: 2025-12-02 08:19:33.75191193 +0000 UTC m=+0.083235233 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:19:33 localhost podman[73593]: 2025-12-02 08:19:33.938908043 +0000 UTC m=+0.270231396 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, container_name=metrics_qdr, 
io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., distribution-scope=public) Dec 2 03:19:33 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:19:39 localhost systemd[1]: tmp-crun.fGqmJh.mount: Deactivated successfully. Dec 2 03:19:39 localhost systemd[1]: tmp-crun.PPpyBa.mount: Deactivated successfully. 
Dec 2 03:19:39 localhost podman[73721]: 2025-12-02 08:19:39.512824736 +0000 UTC m=+0.144226771 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, distribution-scope=public, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:19:39 localhost podman[73733]: 2025-12-02 08:19:39.473844248 +0000 UTC m=+0.094728702 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc.) 
Dec 2 03:19:39 localhost podman[73722]: 2025-12-02 08:19:39.540716208 +0000 UTC m=+0.163598558 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1) Dec 2 03:19:39 localhost podman[73733]: 2025-12-02 08:19:39.55311364 +0000 UTC m=+0.173998054 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:19:39 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:19:39 localhost podman[73722]: 2025-12-02 08:19:39.595245176 +0000 UTC m=+0.218127546 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64) Dec 2 03:19:39 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:19:39 localhost podman[73720]: 2025-12-02 08:19:39.557175443 +0000 UTC m=+0.191445487 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 2 03:19:39 localhost podman[73720]: 2025-12-02 08:19:39.640002624 +0000 UTC m=+0.274272658 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, architecture=x86_64, distribution-scope=public, vcs-type=git, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:19:39 localhost systemd[1]: 
0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:19:39 localhost podman[73721]: 2025-12-02 08:19:39.86726174 +0000 UTC m=+0.498663725 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:19:39 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:19:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:19:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:19:43 localhost podman[73811]: 2025-12-02 08:19:43.438299817 +0000 UTC m=+0.076783815 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=ovn_metadata_agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:19:43 localhost podman[73811]: 2025-12-02 08:19:43.475128005 +0000 UTC m=+0.113611953 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:19:43 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:19:43 localhost podman[73812]: 2025-12-02 08:19:43.495280263 +0000 UTC m=+0.133546255 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:19:43 localhost podman[73812]: 2025-12-02 08:19:43.518829685 +0000 UTC m=+0.157095667 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, 
config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:19:43 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:19:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:19:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:19:51 localhost systemd[1]: tmp-crun.IGoZUe.mount: Deactivated successfully. 
Dec 2 03:19:51 localhost podman[73859]: 2025-12-02 08:19:51.50407197 +0000 UTC m=+0.102102894 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, release=1761123044, architecture=x86_64, 
tcib_managed=true, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4) Dec 2 03:19:51 localhost podman[73858]: 2025-12-02 08:19:51.464806745 +0000 UTC m=+0.104379128 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, distribution-scope=public, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Dec 2 03:19:51 localhost podman[73859]: 2025-12-02 08:19:51.535459289 +0000 UTC m=+0.133490203 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, architecture=x86_64, container_name=iscsid, release=1761123044, 
konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:19:51 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:19:51 localhost podman[73858]: 2025-12-02 08:19:51.545399474 +0000 UTC m=+0.184971847 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public) Dec 2 03:19:51 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:20:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:20:04 localhost podman[73897]: 2025-12-02 08:20:04.434501627 +0000 UTC m=+0.079043787 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git) Dec 2 03:20:04 localhost podman[73897]: 2025-12-02 08:20:04.637534103 +0000 UTC m=+0.282076263 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:20:04 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:20:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:20:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:20:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:20:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:20:10 localhost podman[73925]: 2025-12-02 08:20:10.443829754 +0000 UTC m=+0.084792597 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, architecture=x86_64, container_name=logrotate_crond, vcs-type=git, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4) Dec 2 03:20:10 localhost systemd[1]: tmp-crun.NbERkn.mount: Deactivated successfully. Dec 2 03:20:10 localhost podman[73926]: 2025-12-02 08:20:10.453244514 +0000 UTC m=+0.086977377 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, config_id=tripleo_step4) Dec 2 03:20:10 localhost systemd[1]: tmp-crun.ax10b0.mount: Deactivated successfully. 
Dec 2 03:20:10 localhost podman[73928]: 2025-12-02 08:20:10.508505943 +0000 UTC m=+0.139129950 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, 
com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:20:10 localhost podman[73925]: 2025-12-02 08:20:10.534950265 +0000 UTC m=+0.175913058 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:20:10 localhost podman[73927]: 2025-12-02 08:20:10.490792523 +0000 UTC m=+0.124657480 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, release=1761123044, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:20:10 localhost systemd[1]: 
0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:20:10 localhost podman[73928]: 2025-12-02 08:20:10.556036408 +0000 UTC m=+0.186660445 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z) Dec 2 03:20:10 localhost podman[73927]: 2025-12-02 08:20:10.573880721 +0000 UTC m=+0.207745638 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 
17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:20:10 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:20:10 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:20:10 localhost podman[73926]: 2025-12-02 08:20:10.807744671 +0000 UTC m=+0.441477584 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, description=Red Hat OpenStack 
Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:20:10 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:20:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:20:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:20:14 localhost podman[74015]: 2025-12-02 08:20:14.441857041 +0000 UTC m=+0.082918324 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 2 03:20:14 localhost podman[74016]: 2025-12-02 08:20:14.489420977 +0000 UTC m=+0.127448716 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:20:14 localhost podman[74015]: 2025-12-02 08:20:14.495505635 +0000 UTC m=+0.136566958 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:20:14 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:20:14 localhost podman[74016]: 2025-12-02 08:20:14.533279511 +0000 UTC m=+0.171307320 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:20:14 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:20:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:20:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:20:22 localhost podman[74064]: 2025-12-02 08:20:22.420753323 +0000 UTC m=+0.062394227 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:20:22 localhost systemd[1]: tmp-crun.zWsRgX.mount: Deactivated successfully. Dec 2 03:20:22 localhost podman[74063]: 2025-12-02 08:20:22.491848841 +0000 UTC m=+0.131898100 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:20:22 localhost podman[74064]: 2025-12-02 08:20:22.510694732 +0000 UTC m=+0.152335626 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, 
io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, tcib_managed=true) Dec 2 03:20:22 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:20:22 localhost podman[74063]: 2025-12-02 08:20:22.531147658 +0000 UTC m=+0.171196907 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, com.redhat.component=openstack-collectd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 
'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:20:22 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:20:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:20:35 localhost podman[74101]: 2025-12-02 08:20:35.437880089 +0000 UTC m=+0.076836696 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:20:35 localhost podman[74101]: 2025-12-02 08:20:35.665146945 +0000 UTC m=+0.304103542 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com) Dec 2 03:20:35 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:20:41 localhost podman[74208]: 2025-12-02 08:20:41.458019265 +0000 UTC m=+0.092264233 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target) Dec 2 03:20:41 localhost podman[74207]: 2025-12-02 08:20:41.499845182 +0000 UTC m=+0.135672143 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:20:41 localhost podman[74207]: 2025-12-02 08:20:41.511288509 +0000 UTC m=+0.147115550 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12) Dec 2 03:20:41 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:20:41 localhost podman[74209]: 2025-12-02 08:20:41.552931801 +0000 UTC m=+0.185210505 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:20:41 localhost podman[74210]: 2025-12-02 08:20:41.620584042 +0000 UTC m=+0.248116844 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:20:41 localhost podman[74209]: 2025-12-02 08:20:41.633468639 +0000 UTC m=+0.265747353 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:20:41 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:20:41 localhost podman[74210]: 2025-12-02 08:20:41.647512127 +0000 UTC m=+0.275044949 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:20:41 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:20:41 localhost podman[74208]: 2025-12-02 08:20:41.794857583 +0000 UTC m=+0.429102521 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12) Dec 2 03:20:41 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:20:42 localhost systemd[1]: tmp-crun.CwrRNC.mount: Deactivated successfully. Dec 2 03:20:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 03:20:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:20:45 localhost podman[74295]: 2025-12-02 08:20:45.426488096 +0000 UTC m=+0.068091104 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:20:45 localhost systemd[1]: tmp-crun.ofKjDb.mount: Deactivated successfully. Dec 2 03:20:45 localhost podman[74294]: 2025-12-02 08:20:45.453301357 +0000 UTC m=+0.092096888 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:20:45 localhost podman[74295]: 2025-12-02 08:20:45.506965851 +0000 UTC m=+0.148568859 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, container_name=ovn_controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 2 03:20:45 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:20:45 localhost podman[74294]: 2025-12-02 08:20:45.563203257 +0000 UTC m=+0.201998778 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 2 03:20:45 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:20:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:20:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:20:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:20:53 localhost recover_tripleo_nova_virtqemud[74345]: 62312 Dec 2 03:20:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:20:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:20:53 localhost podman[74343]: 2025-12-02 08:20:53.423633491 +0000 UTC m=+0.061596484 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:20:53 localhost podman[74342]: 2025-12-02 08:20:53.436056625 +0000 UTC m=+0.076805105 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, container_name=collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:20:53 localhost podman[74343]: 2025-12-02 08:20:53.439894651 +0000 UTC m=+0.077857664 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, 
build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Dec 2 03:20:53 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:20:53 localhost podman[74342]: 2025-12-02 08:20:53.494684387 +0000 UTC m=+0.135432937 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:20:53 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:21:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:21:06 localhost podman[74382]: 2025-12-02 08:21:06.446582318 +0000 UTC m=+0.085237349 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:21:06 localhost podman[74382]: 2025-12-02 08:21:06.634076014 +0000 UTC m=+0.272731005 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:21:06 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:21:12 localhost systemd[1]: tmp-crun.7V6GiO.mount: Deactivated successfully. Dec 2 03:21:12 localhost podman[74410]: 2025-12-02 08:21:12.448304525 +0000 UTC m=+0.086270348 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 2 03:21:12 localhost podman[74411]: 2025-12-02 08:21:12.48935801 +0000 UTC m=+0.123144678 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git) Dec 2 03:21:12 localhost podman[74417]: 2025-12-02 08:21:12.495184251 +0000 UTC m=+0.120867425 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:21:12 localhost podman[74412]: 2025-12-02 08:21:12.424763103 +0000 UTC m=+0.058924601 container health_status 
4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Dec 2 03:21:12 localhost podman[74417]: 2025-12-02 08:21:12.538068348 +0000 UTC m=+0.163751522 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:21:12 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:21:12 localhost podman[74412]: 2025-12-02 08:21:12.559972063 +0000 UTC m=+0.194133581 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, container_name=ceilometer_agent_compute, version=17.1.12) Dec 2 03:21:12 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:21:12 localhost podman[74410]: 2025-12-02 08:21:12.582838586 +0000 UTC m=+0.220804409 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, distribution-scope=public, name=rhosp17/openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:21:12 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:21:12 localhost podman[74411]: 2025-12-02 08:21:12.828835771 +0000 UTC m=+0.462622419 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true)
Dec 2 03:21:12 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully.
Dec 2 03:21:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.
Dec 2 03:21:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.
Dec 2 03:21:16 localhost podman[74553]: 2025-12-02 08:21:16.137794738 +0000 UTC m=+0.082753061 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:21:16 localhost podman[74551]: 2025-12-02 08:21:16.176836847 +0000 UTC m=+0.121889282 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:21:16 localhost python3[74552]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:16 localhost podman[74553]: 2025-12-02 08:21:16.210487979 +0000 UTC m=+0.155446262 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, release=1761123044, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public) Dec 2 03:21:16 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:21:16 localhost podman[74551]: 2025-12-02 08:21:16.229952396 +0000 UTC m=+0.175004851 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 2 03:21:16 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully.
Dec 2 03:21:16 localhost python3[74641]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663675.8658788-113047-29496742702015/source _original_basename=tmpe817b0h8 follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:21:17 localhost python3[74671]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74843]: Invoked with 943560368233 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663678.78743-113240-203508478189784/AnsiballZ_command.py _
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74846]: Starting module and watcher
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74846]: Start watching 74847 (3600)
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74847]: Start module (74847)
Dec 2 03:21:19 localhost ansible-async_wrapper.py[74843]: Return async_wrapper task started.
Dec 2 03:21:19 localhost python3[74867]: ansible-ansible.legacy.async_status Invoked with jid=943560368233.74843 mode=status _async_dir=/tmp/.ansible_async
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 2 03:21:23 localhost puppet-user[74852]: (file: /etc/puppet/hiera.yaml)
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: Undefined variable '::deploy_config_name';
Dec 2 03:21:23 localhost puppet-user[74852]: (file & line not available)
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 2 03:21:23 localhost puppet-user[74852]: (file & line not available)
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74852]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74852]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74852]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74852]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74852]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74852]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74852]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74852]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74852]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 2 03:21:23 localhost puppet-user[74852]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 2 03:21:23 localhost puppet-user[74852]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 2 03:21:23 localhost puppet-user[74852]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 2 03:21:23 localhost puppet-user[74852]: Notice: Compiled catalog for np0005541913.localdomain in environment production in 0.22 seconds
Dec 2 03:21:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.
Dec 2 03:21:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.
Dec 2 03:21:23 localhost puppet-user[74852]: Notice: Applied catalog in 0.34 seconds
Dec 2 03:21:23 localhost puppet-user[74852]: Application:
Dec 2 03:21:23 localhost puppet-user[74852]: Initial environment: production
Dec 2 03:21:23 localhost puppet-user[74852]: Converged environment: production
Dec 2 03:21:23 localhost puppet-user[74852]: Run mode: user
Dec 2 03:21:23 localhost puppet-user[74852]: Changes:
Dec 2 03:21:23 localhost puppet-user[74852]: Events:
Dec 2 03:21:23 localhost puppet-user[74852]: Resources:
Dec 2 03:21:23 localhost puppet-user[74852]: Total: 19
Dec 2 03:21:23 localhost puppet-user[74852]: Time:
Dec 2 03:21:23 localhost puppet-user[74852]: Schedule: 0.00
Dec 2 03:21:23 localhost puppet-user[74852]: Package: 0.00
Dec 2 03:21:23 localhost puppet-user[74852]: Exec: 0.01
Dec 2 03:21:23 localhost puppet-user[74852]: Augeas: 0.01
Dec 2 03:21:23 localhost puppet-user[74852]: File: 0.02
Dec 2 03:21:23 localhost puppet-user[74852]: Service: 0.10
Dec 2 03:21:23 localhost puppet-user[74852]: Config retrieval: 0.28
Dec 2 03:21:23 localhost puppet-user[74852]: Transaction evaluation: 0.34
Dec 2 03:21:23 localhost puppet-user[74852]: Catalog application: 0.34
Dec 2 03:21:23 localhost puppet-user[74852]: Last run: 1764663683
Dec 2 03:21:23 localhost puppet-user[74852]: Filebucket: 0.00
Dec 2 03:21:23 localhost puppet-user[74852]: Total: 0.35
Dec 2 03:21:23 localhost puppet-user[74852]: Version:
Dec 2 03:21:23 localhost puppet-user[74852]: Config: 1764663683
Dec 2 03:21:23 localhost puppet-user[74852]: Puppet: 7.10.0 Dec 2 03:21:23 localhost systemd[1]: tmp-crun.kJS4s8.mount: Deactivated successfully. Dec 2 03:21:23 localhost podman[74988]: 2025-12-02 08:21:23.695757284 +0000 UTC m=+0.084991212 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:21:23 localhost podman[74988]: 2025-12-02 08:21:23.709557946 +0000 UTC m=+0.098791934 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044) Dec 2 03:21:23 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:21:23 localhost ansible-async_wrapper.py[74847]: Module complete (74847) Dec 2 03:21:23 localhost podman[74987]: 2025-12-02 08:21:23.795791911 +0000 UTC m=+0.188235338 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, container_name=collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Dec 2 03:21:23 localhost podman[74987]: 2025-12-02 08:21:23.832009943 +0000 UTC m=+0.224453310 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 
'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:21:23 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:21:24 localhost ansible-async_wrapper.py[74846]: Done in kid B. 
Dec 2 03:21:30 localhost python3[75042]: ansible-ansible.legacy.async_status Invoked with jid=943560368233.74843 mode=status _async_dir=/tmp/.ansible_async
Dec 2 03:21:30 localhost python3[75058]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 03:21:31 localhost python3[75074]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:21:31 localhost python3[75124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:21:32 localhost python3[75142]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpas6n_xmd recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 03:21:32 localhost python3[75172]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:21:33 localhost python3[75277]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 2 03:21:34 localhost python3[75296]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:21:35 localhost python3[75328]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 2 03:21:35 localhost python3[75378]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:21:36 localhost python3[75396]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 03:21:36 localhost python3[75458]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 2 03:21:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.
Dec 2 03:21:36 localhost podman[75476]: 2025-12-02 08:21:36.955724328 +0000 UTC m=+0.077823614 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05,
konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) 
Dec 2 03:21:37 localhost python3[75477]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:37 localhost podman[75476]: 2025-12-02 08:21:37.219051912 +0000 UTC m=+0.341151188 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com) Dec 2 03:21:37 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:21:37 localhost python3[75598]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:37 localhost python3[75623]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:38 localhost python3[75711]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:38 localhost python3[75729]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:39 localhost python3[75774]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:21:39 localhost systemd[1]: Reloading. 
Dec 2 03:21:39 localhost systemd-sysv-generator[75802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:21:39 localhost systemd-rc-local-generator[75799]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:21:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:21:40 localhost python3[75860]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:40 localhost python3[75878]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:40 localhost python3[75940]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 2 03:21:41 localhost python3[75958]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:21:41 localhost python3[75988]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:21:41 localhost systemd[1]: Reloading. Dec 2 03:21:41 localhost systemd-rc-local-generator[76011]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:21:41 localhost systemd-sysv-generator[76015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:21:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:21:42 localhost systemd[1]: Starting Create netns directory... Dec 2 03:21:42 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 03:21:42 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 03:21:42 localhost systemd[1]: Finished Create netns directory. Dec 2 03:21:42 localhost python3[76045]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 2 03:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 03:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:21:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:21:42 localhost systemd[1]: tmp-crun.vjBZPD.mount: Deactivated successfully. Dec 2 03:21:43 localhost podman[76063]: 2025-12-02 08:21:43.030975348 +0000 UTC m=+0.126374017 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4) Dec 2 03:21:43 localhost podman[76070]: 2025-12-02 08:21:43.043650118 +0000 UTC m=+0.129602635 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:21:43 localhost podman[76064]: 2025-12-02 08:21:43.083769587 +0000 UTC m=+0.171810022 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, url=https://www.redhat.com, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:21:43 localhost podman[76070]: 2025-12-02 08:21:43.092854869 +0000 UTC m=+0.178807376 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true) Dec 2 03:21:43 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:21:43 localhost podman[76064]: 2025-12-02 08:21:43.107901775 +0000 UTC m=+0.195942210 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:21:43 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:21:43 localhost systemd[1]: tmp-crun.ARtC1X.mount: Deactivated successfully. Dec 2 03:21:43 localhost podman[76062]: 2025-12-02 08:21:43.006687586 +0000 UTC m=+0.104474160 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, container_name=logrotate_crond, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible) Dec 2 03:21:43 localhost podman[76062]: 2025-12-02 08:21:43.188954588 +0000 UTC m=+0.286741172 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack 
TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:21:43 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:21:43 localhost podman[76063]: 2025-12-02 08:21:43.367039843 +0000 UTC m=+0.462438502 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:21:43 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:21:44 localhost python3[76195]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 2 03:21:44 localhost podman[76233]: 2025-12-02 08:21:44.951585347 +0000 UTC m=+0.065789601 container create 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 
5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Dec 2 03:21:44 localhost systemd[1]: Started libpod-conmon-1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.scope. Dec 2 03:21:45 localhost systemd[1]: Started libcrun container. Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399317fe788e77a051163f65a715baa05b56d1254753267f43144269e89c7fb/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399317fe788e77a051163f65a715baa05b56d1254753267f43144269e89c7fb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399317fe788e77a051163f65a715baa05b56d1254753267f43144269e89c7fb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399317fe788e77a051163f65a715baa05b56d1254753267f43144269e89c7fb/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399317fe788e77a051163f65a715baa05b56d1254753267f43144269e89c7fb/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost podman[76233]: 2025-12-02 08:21:44.917271318 +0000 UTC m=+0.031475582 image pull 
registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:21:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:21:45 localhost podman[76233]: 2025-12-02 08:21:45.045574207 +0000 UTC m=+0.159778491 container init 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-compute-container) Dec 2 03:21:45 localhost systemd[1]: tmp-crun.zwBKVi.mount: Deactivated successfully. Dec 2 03:21:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:21:45 localhost podman[76233]: 2025-12-02 08:21:45.086401456 +0000 UTC m=+0.200605710 container start 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step5, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute) Dec 2 03:21:45 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. 
Dec 2 03:21:45 localhost python3[76195]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:21:45 localhost systemd[1]: Created slice User Slice of UID 0. Dec 2 03:21:45 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Dec 2 03:21:45 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 2 03:21:45 localhost systemd[1]: Starting User Manager for UID 0... Dec 2 03:21:45 localhost podman[76255]: 2025-12-02 08:21:45.184083539 +0000 UTC m=+0.087587634 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true) Dec 2 03:21:45 localhost podman[76255]: 2025-12-02 08:21:45.244945772 +0000 UTC m=+0.148449907 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, 
vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.expose-services=, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:21:45 localhost podman[76255]: unhealthy Dec 2 03:21:45 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:21:45 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 03:21:45 localhost systemd[76273]: Queued start job for default target Main User Target. Dec 2 03:21:45 localhost systemd[76273]: Created slice User Application Slice. Dec 2 03:21:45 localhost systemd[76273]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 2 03:21:45 localhost systemd[76273]: Started Daily Cleanup of User's Temporary Directories. Dec 2 03:21:45 localhost systemd[76273]: Reached target Paths. Dec 2 03:21:45 localhost systemd[76273]: Reached target Timers. Dec 2 03:21:45 localhost systemd[76273]: Starting D-Bus User Message Bus Socket... Dec 2 03:21:45 localhost systemd[76273]: Starting Create User's Volatile Files and Directories... Dec 2 03:21:45 localhost systemd[76273]: Listening on D-Bus User Message Bus Socket. Dec 2 03:21:45 localhost systemd[76273]: Reached target Sockets. 
Dec 2 03:21:45 localhost systemd[76273]: Finished Create User's Volatile Files and Directories. Dec 2 03:21:45 localhost systemd[76273]: Reached target Basic System. Dec 2 03:21:45 localhost systemd[76273]: Reached target Main User Target. Dec 2 03:21:45 localhost systemd[76273]: Startup finished in 137ms. Dec 2 03:21:45 localhost systemd[1]: Started User Manager for UID 0. Dec 2 03:21:45 localhost systemd[1]: Started Session c10 of User root. Dec 2 03:21:45 localhost systemd[1]: session-c10.scope: Deactivated successfully. Dec 2 03:21:45 localhost podman[76355]: 2025-12-02 08:21:45.569301635 +0000 UTC m=+0.091350478 container create bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=nova_wait_for_compute_service, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:21:45 localhost podman[76355]: 2025-12-02 08:21:45.50621039 +0000 UTC m=+0.028259283 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:21:45 localhost systemd[1]: Started libpod-conmon-bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d.scope. Dec 2 03:21:45 localhost systemd[1]: Started libcrun container. 
Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c52eb2917af814f67bf9757f04611b4867e02cd94735e31ef932542a90a8de8/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c52eb2917af814f67bf9757f04611b4867e02cd94735e31ef932542a90a8de8/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 2 03:21:45 localhost podman[76355]: 2025-12-02 08:21:45.648568278 +0000 UTC m=+0.170617121 container init bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, release=1761123044, version=17.1.12, name=rhosp17/openstack-nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, tcib_managed=true, config_id=tripleo_step5, io.openshift.expose-services=) Dec 2 03:21:45 localhost podman[76355]: 2025-12-02 08:21:45.65586025 +0000 UTC m=+0.177909123 container start bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack 
TripleO Team, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:21:45 localhost podman[76355]: 2025-12-02 08:21:45.656181429 +0000 UTC m=+0.178230312 container attach bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, 
build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, container_name=nova_wait_for_compute_service, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true) Dec 2 03:21:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:21:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:21:46 localhost systemd[1]: tmp-crun.YZYJQz.mount: Deactivated successfully. Dec 2 03:21:46 localhost podman[76380]: 2025-12-02 08:21:46.467381918 +0000 UTC m=+0.101396455 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, build-date=2025-11-18T23:34:05Z, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:21:46 localhost podman[76380]: 2025-12-02 08:21:46.523071249 +0000 UTC m=+0.157085776 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12) Dec 2 03:21:46 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:21:46 localhost podman[76379]: 2025-12-02 08:21:46.526176505 +0000 UTC m=+0.160179192 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Dec 2 03:21:46 localhost podman[76379]: 2025-12-02 08:21:46.607974928 +0000 UTC m=+0.241977685 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vcs-type=git, 
io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:21:46 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:21:46 localhost systemd[1]: tmp-crun.U1n95X.mount: Deactivated successfully. Dec 2 03:21:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:21:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:21:54 localhost systemd[1]: tmp-crun.gzaz3Z.mount: Deactivated successfully. Dec 2 03:21:54 localhost podman[76425]: 2025-12-02 08:21:54.453941382 +0000 UTC m=+0.094289590 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:21:54 localhost podman[76425]: 2025-12-02 08:21:54.48709807 +0000 UTC m=+0.127446198 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-collectd, vcs-type=git, url=https://www.redhat.com, 
release=1761123044, config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, 
maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:21:54 localhost podman[76426]: 2025-12-02 08:21:54.494278379 +0000 UTC m=+0.133452234 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Dec 2 03:21:54 localhost podman[76426]: 2025-12-02 08:21:54.526066448 +0000 UTC m=+0.165240303 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3) Dec 2 03:21:54 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:21:54 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:21:55 localhost systemd[1]: Stopping User Manager for UID 0... Dec 2 03:21:55 localhost systemd[76273]: Activating special unit Exit the Session... 
Dec 2 03:21:55 localhost systemd[76273]: Stopped target Main User Target. Dec 2 03:21:55 localhost systemd[76273]: Stopped target Basic System. Dec 2 03:21:55 localhost systemd[76273]: Stopped target Paths. Dec 2 03:21:55 localhost systemd[76273]: Stopped target Sockets. Dec 2 03:21:55 localhost systemd[76273]: Stopped target Timers. Dec 2 03:21:55 localhost systemd[76273]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 03:21:55 localhost systemd[76273]: Closed D-Bus User Message Bus Socket. Dec 2 03:21:55 localhost systemd[76273]: Stopped Create User's Volatile Files and Directories. Dec 2 03:21:55 localhost systemd[76273]: Removed slice User Application Slice. Dec 2 03:21:55 localhost systemd[76273]: Reached target Shutdown. Dec 2 03:21:55 localhost systemd[76273]: Finished Exit the Session. Dec 2 03:21:55 localhost systemd[76273]: Reached target Exit the Session. Dec 2 03:21:55 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 2 03:21:55 localhost systemd[1]: Stopped User Manager for UID 0. Dec 2 03:21:55 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 2 03:21:55 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 2 03:21:55 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 2 03:21:55 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 2 03:21:55 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 2 03:22:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:22:08 localhost podman[76468]: 2025-12-02 08:22:08.022835471 +0000 UTC m=+0.059718584 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 2 03:22:08 localhost podman[76468]: 2025-12-02 08:22:08.187853356 +0000 UTC m=+0.224736579 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T22:49:46Z) Dec 2 03:22:08 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:22:12 localhost systemd[1]: session-28.scope: Deactivated successfully. Dec 2 03:22:12 localhost systemd[1]: session-28.scope: Consumed 2.898s CPU time. Dec 2 03:22:12 localhost systemd-logind[757]: Session 28 logged out. Waiting for processes to exit. Dec 2 03:22:12 localhost systemd-logind[757]: Removed session 28. Dec 2 03:22:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:22:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:22:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:22:13 localhost podman[76496]: 2025-12-02 08:22:13.454159648 +0000 UTC m=+0.090360221 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:22:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 03:22:13 localhost podman[76496]: 2025-12-02 08:22:13.514133377 +0000 UTC m=+0.150333950 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 2 03:22:13 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:22:13 localhost podman[76495]: 2025-12-02 08:22:13.520565244 +0000 UTC m=+0.159287617 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:22:13 localhost podman[76495]: 2025-12-02 08:22:13.603031426 +0000 UTC m=+0.241753809 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, 
name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond) Dec 2 03:22:13 localhost systemd[1]: 
0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:22:13 localhost podman[76497]: 2025-12-02 08:22:13.614991136 +0000 UTC m=+0.243668081 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Dec 2 03:22:13 localhost podman[76497]: 2025-12-02 08:22:13.666136531 +0000 UTC m=+0.294813486 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team) Dec 2 03:22:13 localhost podman[76537]: 2025-12-02 08:22:13.666399678 +0000 UTC m=+0.188122535 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:22:13 localhost systemd[1]: 
7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:22:14 localhost podman[76537]: 2025-12-02 08:22:14.035146929 +0000 UTC m=+0.556869786 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, 
container_name=nova_migration_target, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 2 03:22:14 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:22:14 localhost systemd[1]: tmp-crun.f6U4Me.mount: Deactivated successfully. Dec 2 03:22:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:22:15 localhost systemd[1]: tmp-crun.6lYgM2.mount: Deactivated successfully. 
Dec 2 03:22:15 localhost podman[76587]: 2025-12-02 08:22:15.437116872 +0000 UTC m=+0.081649100 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container) Dec 2 03:22:15 localhost podman[76587]: 2025-12-02 08:22:15.494704675 +0000 UTC m=+0.139236863 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, container_name=nova_compute, release=1761123044, tcib_managed=true, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12)
Dec 2 03:22:15 localhost podman[76587]: unhealthy
Dec 2 03:22:15 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE
Dec 2 03:22:15 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'.
Dec 2 03:22:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.
Dec 2 03:22:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.
Dec 2 03:22:17 localhost systemd[1]: tmp-crun.sO7Oi5.mount: Deactivated successfully. 
Dec 2 03:22:17 localhost podman[76610]: 2025-12-02 08:22:17.450272182 +0000 UTC m=+0.092701325 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, release=1761123044, architecture=x86_64) Dec 2 03:22:17 localhost podman[76611]: 2025-12-02 08:22:17.499492364 +0000 UTC m=+0.140825657 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Dec 2 03:22:17 localhost podman[76610]: 2025-12-02 08:22:17.552879401 +0000 UTC m=+0.195308574 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 2 03:22:17 localhost podman[76611]: 2025-12-02 08:22:17.569132 +0000 UTC m=+0.210465283 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12) Dec 2 03:22:17 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:22:17 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:22:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:22:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:22:25 localhost systemd[1]: tmp-crun.wmzfwv.mount: Deactivated successfully. 
Dec 2 03:22:25 localhost podman[76654]: 2025-12-02 08:22:25.444331684 +0000 UTC m=+0.088176241 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:22:25 localhost podman[76654]: 2025-12-02 08:22:25.456025967 +0000 UTC m=+0.099870564 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 03:22:25 localhost systemd[1]: tmp-crun.dYGM9c.mount: Deactivated successfully. 
Dec 2 03:22:25 localhost podman[76655]: 2025-12-02 08:22:25.494962454 +0000 UTC m=+0.131963631 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git) Dec 2 03:22:25 localhost podman[76655]: 2025-12-02 08:22:25.505991989 +0000 UTC m=+0.142993196 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:22:25 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:22:25 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:22:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
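Each `health_status` event above embeds the container ID, image, name, and result in a single journal line. A small sketch for pulling those fields out of a saved log; the regex is an assumption based on the exact field order in this excerpt (`image=`, `name=`, `health_status=`) and would need adjusting if podman reordered its annotations:

```python
import re

# Extracts (name, status) from podman journal lines of the form:
#   "container health_status <64-hex-id> (image=..., name=..., health_status=..., ...)"
# Field order assumed from the log excerpt above.
EVENT_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,)]+)"
)

def parse_health_events(lines):
    """Return (container_name, health_status) tuples for each event found."""
    return [(m.group("name"), m.group("status"))
            for line in lines if (m := EVENT_RE.search(line))]

sample = (
    "Dec 2 03:22:17 localhost podman[76610]: 2025-12-02 08:22:17.450272182 "
    "+0000 UTC m=+0.092701325 container health_status "
    "1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 "
    "(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, "
    "name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4)"
)
print(parse_health_events([sample]))  # [('ovn_metadata_agent', 'healthy')]
```

Filtering the result for statuses other than `healthy` gives a quick per-container view of which healthchecks in a log window went bad.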
Dec 2 03:22:38 localhost podman[76693]: 2025-12-02 08:22:38.442105862 +0000 UTC m=+0.075050466 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:22:38 localhost podman[76693]: 2025-12-02 08:22:38.743265214 +0000 UTC m=+0.376209898 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:22:38 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:22:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:22:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:22:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:22:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:22:44 localhost systemd[1]: tmp-crun.48ndkU.mount: Deactivated successfully. 
Dec 2 03:22:44 localhost podman[76800]: 2025-12-02 08:22:44.423021363 +0000 UTC m=+0.062642124 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, tcib_managed=true) Dec 2 03:22:44 localhost podman[76797]: 2025-12-02 08:22:44.428474074 +0000 UTC m=+0.069335189 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:22:44 localhost podman[76798]: 2025-12-02 08:22:44.492719811 +0000 UTC m=+0.130000157 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack 
TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com) Dec 2 03:22:44 localhost podman[76797]: 2025-12-02 08:22:44.511963323 +0000 UTC m=+0.152824458 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z) Dec 2 03:22:44 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:22:44 localhost podman[76799]: 2025-12-02 08:22:44.475953957 +0000 UTC m=+0.116416311 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:22:44 localhost podman[76800]: 2025-12-02 08:22:44.527882174 +0000 UTC m=+0.167502935 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true) Dec 2 03:22:44 localhost podman[76799]: 2025-12-02 08:22:44.560214858 +0000 UTC 
m=+0.200677212 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4) Dec 2 03:22:44 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:22:44 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:22:44 localhost podman[76798]: 2025-12-02 08:22:44.863013934 +0000 UTC m=+0.500294350 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4) Dec 2 03:22:44 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:22:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:22:46 localhost podman[76895]: 2025-12-02 08:22:46.43569162 +0000 UTC m=+0.081391972 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:22:46 localhost podman[76895]: 2025-12-02 08:22:46.496377419 +0000 UTC m=+0.142077811 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, 
architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_id=tripleo_step5, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z) Dec 2 03:22:46 localhost podman[76895]: unhealthy Dec 2 03:22:46 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:22:46 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 03:22:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:22:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:22:48 localhost podman[76917]: 2025-12-02 08:22:48.421104133 +0000 UTC m=+0.062446239 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 03:22:48 localhost podman[76918]: 2025-12-02 08:22:48.467715882 +0000 UTC m=+0.099187394 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public) Dec 2 03:22:48 localhost podman[76918]: 2025-12-02 08:22:48.513844228 +0000 UTC m=+0.145315690 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:22:48 localhost podman[76917]: 2025-12-02 08:22:48.520208134 +0000 UTC m=+0.161550250 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, 
distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:22:48 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:22:48 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:22:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:22:56 localhost recover_tripleo_nova_virtqemud[76976]: 62312 Dec 2 03:22:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:22:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:22:56 localhost systemd[1]: tmp-crun.dIyW6n.mount: Deactivated successfully. 
Dec 2 03:22:56 localhost podman[76962]: 2025-12-02 08:22:56.451955711 +0000 UTC m=+0.091928834 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd) Dec 2 03:22:56 localhost podman[76962]: 2025-12-02 08:22:56.460846016 +0000 UTC m=+0.100819179 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true) Dec 2 03:22:56 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:22:56 localhost podman[76963]: 2025-12-02 08:22:56.503645601 +0000 UTC m=+0.141374212 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, version=17.1.12, vcs-type=git) Dec 2 03:22:56 localhost podman[76963]: 2025-12-02 08:22:56.535131832 +0000 UTC m=+0.172860423 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:22:56 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:23:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:23:09 localhost systemd[1]: tmp-crun.zIixDL.mount: Deactivated successfully. 
Dec 2 03:23:09 localhost podman[77001]: 2025-12-02 08:23:09.452287311 +0000 UTC m=+0.091463971 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:23:09 localhost podman[77001]: 2025-12-02 08:23:09.645361982 +0000 UTC m=+0.284538723 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:23:09 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:23:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:23:15 localhost podman[77030]: 2025-12-02 08:23:15.456548598 +0000 UTC m=+0.090031762 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:23:15 localhost podman[77030]: 2025-12-02 08:23:15.493673706 +0000 UTC m=+0.127156780 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:23:15 localhost podman[77031]: 2025-12-02 08:23:15.506769047 +0000 UTC m=+0.139394187 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, 
io.openshift.expose-services=, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Dec 2 03:23:15 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:23:15 localhost podman[77033]: 2025-12-02 08:23:15.554806676 +0000 UTC m=+0.183774894 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, 
release=1761123044, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:23:15 localhost podman[77033]: 2025-12-02 08:23:15.589520777 +0000 UTC m=+0.218488995 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git) Dec 2 03:23:15 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:23:15 localhost podman[77032]: 2025-12-02 08:23:15.658840175 +0000 UTC m=+0.287945297 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, 
vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Dec 2 03:23:15 localhost podman[77032]: 2025-12-02 08:23:15.710106202 +0000 UTC m=+0.339211314 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 03:23:15 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:23:15 localhost podman[77031]: 2025-12-02 08:23:15.855085673 +0000 UTC m=+0.487710833 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, 
release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container) Dec 2 03:23:15 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:23:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:23:17 localhost podman[77125]: 2025-12-02 08:23:17.427336496 +0000 UTC m=+0.068896287 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step5) Dec 2 03:23:17 localhost podman[77125]: 2025-12-02 08:23:17.480397214 +0000 UTC m=+0.121957035 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5) Dec 2 03:23:17 localhost podman[77125]: unhealthy Dec 2 03:23:17 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:23:17 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 03:23:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 03:23:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:23:19 localhost podman[77147]: 2025-12-02 08:23:19.43883894 +0000 UTC m=+0.081588903 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team) Dec 2 03:23:19 localhost systemd[1]: tmp-crun.fBo5nI.mount: Deactivated successfully. 
Dec 2 03:23:19 localhost podman[77148]: 2025-12-02 08:23:19.50224621 +0000 UTC m=+0.142024313 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:23:19 localhost podman[77147]: 2025-12-02 08:23:19.50925685 +0000 UTC m=+0.152006773 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 2 03:23:19 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. 
Dec 2 03:23:19 localhost podman[77148]: 2025-12-02 08:23:19.525672423 +0000 UTC m=+0.165450586 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:23:19 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:23:27 localhost podman[77195]: 2025-12-02 08:23:27.432501957 +0000 UTC m=+0.077359448 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:23:27 localhost systemd[1]: tmp-crun.CWbXkE.mount: Deactivated successfully. 
Dec 2 03:23:27 localhost podman[77196]: 2025-12-02 08:23:27.493229647 +0000 UTC m=+0.132280271 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3) Dec 2 03:23:27 localhost podman[77196]: 2025-12-02 08:23:27.501336325 +0000 UTC m=+0.140386919 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, distribution-scope=public, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:23:27 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:23:27 localhost podman[77195]: 2025-12-02 08:23:27.522303531 +0000 UTC m=+0.167161022 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3) Dec 2 03:23:27 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:23:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:23:40 localhost systemd[1]: tmp-crun.h2Jzrn.mount: Deactivated successfully. 
Dec 2 03:23:40 localhost podman[77248]: 2025-12-02 08:23:40.432112897 +0000 UTC m=+0.072186790 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:49:46Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, architecture=x86_64) Dec 2 03:23:40 localhost podman[77248]: 2025-12-02 08:23:40.597919982 +0000 UTC m=+0.237993885 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red 
Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:23:40 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:23:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:23:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:23:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:23:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:23:46 localhost podman[77341]: 2025-12-02 08:23:46.45626782 +0000 UTC m=+0.091846420 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:23:46 localhost podman[77341]: 2025-12-02 08:23:46.50813488 +0000 UTC m=+0.143713500 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:23:46 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:23:46 localhost podman[77340]: 2025-12-02 08:23:46.422235041 +0000 UTC m=+0.063926336 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Dec 2 03:23:46 localhost podman[77342]: 2025-12-02 08:23:46.554257944 +0000 UTC m=+0.185282061 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=) Dec 2 03:23:46 localhost podman[77339]: 2025-12-02 08:23:46.514512082 +0000 UTC m=+0.153925995 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:23:46 localhost podman[77342]: 2025-12-02 08:23:46.573973166 +0000 
UTC m=+0.204997263 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Dec 2 03:23:46 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:23:46 localhost podman[77339]: 2025-12-02 08:23:46.598996342 +0000 UTC m=+0.238410225 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=logrotate_crond, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron) Dec 2 03:23:46 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:23:46 localhost podman[77340]: 2025-12-02 08:23:46.773597994 +0000 UTC m=+0.415289259 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., 
tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:23:46 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:23:47 localhost systemd[1]: tmp-crun.ApqL8s.mount: Deactivated successfully. Dec 2 03:23:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:23:48 localhost podman[77431]: 2025-12-02 08:23:48.443688018 +0000 UTC m=+0.082907179 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:23:48 localhost podman[77431]: 2025-12-02 08:23:48.514056137 +0000 UTC m=+0.153275248 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:23:48 localhost podman[77431]: unhealthy Dec 2 03:23:48 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:23:48 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 03:23:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 03:23:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:23:50 localhost podman[77451]: 2025-12-02 08:23:50.439396018 +0000 UTC m=+0.074198943 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Dec 2 03:23:50 localhost podman[77452]: 2025-12-02 08:23:50.503005015 +0000 UTC m=+0.133561735 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.openshift.expose-services=) Dec 2 03:23:50 localhost podman[77451]: 2025-12-02 08:23:50.527376202 +0000 UTC m=+0.162179087 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:23:50 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:23:50 localhost podman[77452]: 2025-12-02 08:23:50.54914954 +0000 UTC m=+0.179706250 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 03:23:50 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:23:58 localhost podman[77499]: 2025-12-02 08:23:58.442540851 +0000 UTC m=+0.078300964 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:23:58 localhost podman[77499]: 2025-12-02 08:23:58.477372601 +0000 UTC m=+0.113132674 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:23:58 localhost systemd[1]: tmp-crun.WN5DBv.mount: Deactivated successfully. Dec 2 03:23:58 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:23:58 localhost podman[77500]: 2025-12-02 08:23:58.491831391 +0000 UTC m=+0.125517788 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:23:58 localhost podman[77500]: 2025-12-02 08:23:58.501926804 +0000 UTC m=+0.135613211 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:23:58 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:24:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:24:11 localhost systemd[1]: tmp-crun.HLMax1.mount: Deactivated successfully. 
Dec 2 03:24:11 localhost podman[77537]: 2025-12-02 08:24:11.419844267 +0000 UTC m=+0.065270472 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:24:11 localhost podman[77537]: 2025-12-02 08:24:11.643976396 +0000 UTC m=+0.289402521 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, container_name=metrics_qdr, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-type=git) Dec 2 03:24:11 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:24:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:24:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:24:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:24:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:24:17 localhost podman[77567]: 2025-12-02 08:24:17.460901316 +0000 UTC m=+0.091062780 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.buildah.version=1.41.4) Dec 2 03:24:17 localhost podman[77567]: 2025-12-02 08:24:17.490798842 +0000 UTC m=+0.120960366 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ceilometer_agent_compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:24:17 localhost systemd[1]: tmp-crun.fegpL0.mount: Deactivated successfully. 
Dec 2 03:24:17 localhost podman[77566]: 2025-12-02 08:24:17.506507116 +0000 UTC m=+0.141993693 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:24:17 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:24:17 localhost podman[77574]: 2025-12-02 08:24:17.570397671 +0000 UTC m=+0.195447566 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi) Dec 2 03:24:17 localhost podman[77565]: 2025-12-02 08:24:17.623577886 +0000 UTC m=+0.259531606 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, version=17.1.12, url=https://www.redhat.com, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Dec 2 03:24:17 
localhost podman[77565]: 2025-12-02 08:24:17.638050936 +0000 UTC m=+0.274004656 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, architecture=x86_64, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true) Dec 2 03:24:17 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:24:17 localhost podman[77574]: 2025-12-02 08:24:17.65339773 +0000 UTC m=+0.278447655 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:24:17 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:24:17 localhost podman[77566]: 2025-12-02 08:24:17.854234301 +0000 UTC m=+0.489720838 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1) Dec 2 03:24:17 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:24:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:24:19 localhost systemd[1]: tmp-crun.JWQkzb.mount: Deactivated successfully. Dec 2 03:24:19 localhost podman[77660]: 2025-12-02 08:24:19.436950185 +0000 UTC m=+0.078861160 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, version=17.1.12, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Dec 2 03:24:19 localhost podman[77660]: 2025-12-02 08:24:19.498029183 +0000 UTC m=+0.139940208 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team) Dec 2 03:24:19 localhost podman[77660]: unhealthy Dec 2 03:24:19 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:24:19 localhost systemd[1]: 
1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 03:24:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:24:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:24:21 localhost podman[77683]: 2025-12-02 08:24:21.452290517 +0000 UTC m=+0.088778467 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:24:21 localhost podman[77683]: 2025-12-02 08:24:21.489970773 +0000 UTC m=+0.126458733 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 03:24:21 localhost podman[77684]: 2025-12-02 08:24:21.502298416 +0000 UTC m=+0.135335763 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:24:21 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:24:21 localhost podman[77684]: 2025-12-02 08:24:21.557121226 +0000 UTC m=+0.190158163 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller) Dec 2 03:24:21 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:24:29 localhost podman[77729]: 2025-12-02 08:24:29.449825596 +0000 UTC m=+0.088915560 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, version=17.1.12, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z) Dec 2 03:24:29 localhost podman[77729]: 2025-12-02 08:24:29.462969611 +0000 UTC m=+0.102059565 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., version=17.1.12, container_name=collectd, distribution-scope=public) Dec 2 03:24:29 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:24:29 localhost systemd[1]: tmp-crun.UVCJfQ.mount: Deactivated successfully. 
Dec 2 03:24:29 localhost podman[77730]: 2025-12-02 08:24:29.554088161 +0000 UTC m=+0.190789270 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat 
OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Dec 2 03:24:29 localhost podman[77730]: 2025-12-02 08:24:29.593934186 +0000 UTC m=+0.230635335 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, container_name=iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, url=https://www.redhat.com) Dec 2 03:24:29 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:24:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:24:42 localhost systemd[1]: tmp-crun.AWoR36.mount: Deactivated successfully. 
Dec 2 03:24:42 localhost podman[77783]: 2025-12-02 08:24:42.025459423 +0000 UTC m=+0.095154798 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:24:42 localhost podman[77783]: 2025-12-02 08:24:42.222101721 +0000 UTC m=+0.291797066 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z) Dec 2 03:24:42 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:24:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:24:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:24:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:24:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:24:48 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:24:48 localhost recover_tripleo_nova_virtqemud[77936]: 62312 Dec 2 03:24:48 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:24:48 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:24:48 localhost systemd[1]: tmp-crun.kAfyqm.mount: Deactivated successfully. Dec 2 03:24:48 localhost podman[77908]: 2025-12-02 08:24:48.358197656 +0000 UTC m=+0.091449610 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 2 03:24:48 localhost systemd[1]: tmp-crun.pLC2Jz.mount: Deactivated successfully. 
Dec 2 03:24:48 localhost podman[77901]: 2025-12-02 08:24:48.410661742 +0000 UTC m=+0.145686003 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container) Dec 2 03:24:48 localhost podman[77908]: 2025-12-02 08:24:48.418891744 +0000 UTC m=+0.152143688 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:24:48 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:24:48 localhost podman[77902]: 2025-12-02 08:24:48.394757582 +0000 UTC m=+0.126866224 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 2 03:24:48 localhost podman[77901]: 2025-12-02 08:24:48.469633233 +0000 UTC m=+0.204657494 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:24:48 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:24:48 localhost podman[77909]: 2025-12-02 08:24:48.542395107 +0000 UTC m=+0.271287263 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, 
batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:24:48 localhost podman[77909]: 2025-12-02 08:24:48.576044225 +0000 UTC m=+0.304936391 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:24:48 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:24:48 localhost podman[77902]: 2025-12-02 08:24:48.744035389 +0000 UTC m=+0.476144041 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team) Dec 2 03:24:48 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:24:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:24:50 localhost systemd[1]: tmp-crun.SFRoDW.mount: Deactivated successfully. Dec 2 03:24:50 localhost podman[78037]: 2025-12-02 08:24:50.402362065 +0000 UTC m=+0.049908668 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:24:50 localhost podman[78037]: 2025-12-02 08:24:50.423859174 +0000 UTC m=+0.071405767 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=) Dec 2 03:24:50 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:24:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:24:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:24:52 localhost systemd[1]: tmp-crun.HrrkOl.mount: Deactivated successfully. Dec 2 03:24:52 localhost podman[78085]: 2025-12-02 08:24:52.443719177 +0000 UTC m=+0.085700294 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, summary=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 2 03:24:52 localhost systemd[1]: tmp-crun.FPgeRa.mount: Deactivated successfully. 
Dec 2 03:24:52 localhost podman[78086]: 2025-12-02 08:24:52.497281303 +0000 UTC m=+0.135830477 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:24:52 localhost podman[78085]: 2025-12-02 08:24:52.512931965 +0000 UTC m=+0.154913072 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true) Dec 2 03:24:52 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. 
Dec 2 03:24:52 localhost podman[78086]: 2025-12-02 08:24:52.5712924 +0000 UTC m=+0.209841554 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible) Dec 2 03:24:52 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:24:58 localhost systemd[1]: libpod-bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d.scope: Deactivated successfully. Dec 2 03:24:58 localhost podman[78134]: 2025-12-02 08:24:58.544778515 +0000 UTC m=+0.049855836 container died bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_wait_for_compute_service, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044) Dec 2 03:24:58 localhost systemd[1]: tmp-crun.OSxXXO.mount: Deactivated successfully. Dec 2 03:24:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d-userdata-shm.mount: Deactivated successfully. Dec 2 03:24:58 localhost systemd[1]: var-lib-containers-storage-overlay-8c52eb2917af814f67bf9757f04611b4867e02cd94735e31ef932542a90a8de8-merged.mount: Deactivated successfully. 
Dec 2 03:24:58 localhost podman[78134]: 2025-12-02 08:24:58.586245314 +0000 UTC m=+0.091322575 container cleanup bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, container_name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 
17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute) Dec 2 03:24:58 localhost systemd[1]: libpod-conmon-bab4f14ce85ea50a857b87bac282c7122719c952a0f3e60419c5d8fdba42705d.scope: Deactivated successfully. Dec 2 03:24:58 localhost python3[76195]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=ff8ff724cb5f0d02131158e2fae849b6 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 2 03:24:59 localhost python3[78188]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:24:59 localhost python3[78204]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True 
get_attributes=True checksum_algorithm=sha1 Dec 2 03:24:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:24:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:25:00 localhost systemd[1]: tmp-crun.nCQip1.mount: Deactivated successfully. Dec 2 03:25:00 localhost podman[78267]: 2025-12-02 08:25:00.040943943 +0000 UTC m=+0.112993050 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-iscsid-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:25:00 localhost python3[78265]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663899.4964612-118006-217783275407979/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:25:00 localhost podman[78267]: 2025-12-02 08:25:00.046565955 +0000 UTC m=+0.118615062 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.openshift.expose-services=, 
batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z) Dec 2 03:25:00 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:25:00 localhost podman[78266]: 2025-12-02 08:25:00.022335922 +0000 UTC m=+0.093612808 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:25:00 localhost podman[78266]: 2025-12-02 08:25:00.103860172 +0000 UTC m=+0.175136998 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 2 03:25:00 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:25:00 localhost python3[78319]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 03:25:00 localhost systemd[1]: Reloading. Dec 2 03:25:00 localhost systemd-rc-local-generator[78344]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:25:00 localhost systemd-sysv-generator[78347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:25:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:25:01 localhost python3[78371]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 03:25:01 localhost systemd[1]: Reloading. Dec 2 03:25:01 localhost systemd-rc-local-generator[78397]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:25:01 localhost systemd-sysv-generator[78400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:25:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:25:01 localhost systemd[1]: Starting nova_compute container... 
Dec 2 03:25:01 localhost tripleo-start-podman-container[78411]: Creating additional drop-in dependency for "nova_compute" (1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1) Dec 2 03:25:01 localhost systemd[1]: Reloading. Dec 2 03:25:01 localhost systemd-sysv-generator[78472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 03:25:01 localhost systemd-rc-local-generator[78467]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 03:25:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 03:25:02 localhost systemd[1]: Started nova_compute container. Dec 2 03:25:02 localhost python3[78509]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:25:03 localhost python3[78630]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005541913 step=5 update_config_hash_only=False Dec 2 03:25:04 localhost python3[78646]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 03:25:04 localhost python3[78662]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 2 03:25:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:25:12 localhost podman[78663]: 2025-12-02 08:25:12.443862159 +0000 UTC m=+0.086188507 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_id=tripleo_step1, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:25:12 localhost podman[78663]: 2025-12-02 08:25:12.641094942 +0000 UTC m=+0.283421340 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, architecture=x86_64, managed_by=tripleo_ansible) Dec 2 03:25:12 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:25:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. 
Dec 2 03:25:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:25:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:25:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:25:19 localhost podman[78692]: 2025-12-02 08:25:19.469975363 +0000 UTC m=+0.106893216 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64) Dec 2 03:25:19 localhost systemd[1]: tmp-crun.Kb6RBg.mount: Deactivated successfully. 
Dec 2 03:25:19 localhost podman[78694]: 2025-12-02 08:25:19.522211623 +0000 UTC m=+0.150537514 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute) Dec 2 03:25:19 localhost podman[78693]: 2025-12-02 08:25:19.556693083 +0000 UTC m=+0.187933703 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, 
maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:25:19 localhost podman[78692]: 2025-12-02 08:25:19.583431274 +0000 UTC m=+0.220349117 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 2 03:25:19 localhost systemd[1]: 
0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:25:19 localhost podman[78694]: 2025-12-02 08:25:19.60620872 +0000 UTC m=+0.234534621 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 2 03:25:19 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:25:19 localhost podman[78700]: 2025-12-02 08:25:19.661037109 +0000 UTC m=+0.287668875 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:25:19 localhost podman[78700]: 2025-12-02 08:25:19.673332981 +0000 UTC m=+0.299964727 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible) Dec 2 03:25:19 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:25:19 localhost podman[78693]: 2025-12-02 08:25:19.932005072 +0000 UTC m=+0.563245612 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:25:19 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:25:20 localhost systemd[1]: tmp-crun.DlxCwn.mount: Deactivated successfully. Dec 2 03:25:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:25:20 localhost systemd[1]: tmp-crun.Pgqgca.mount: Deactivated successfully. 
Dec 2 03:25:20 localhost podman[78787]: 2025-12-02 08:25:20.549766594 +0000 UTC m=+0.067963945 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044) Dec 2 03:25:20 localhost podman[78787]: 2025-12-02 08:25:20.599118617 +0000 UTC m=+0.117315928 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, container_name=nova_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 2 03:25:20 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:25:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:25:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:25:23 localhost podman[78812]: 2025-12-02 08:25:23.45087326 +0000 UTC m=+0.090091692 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:25:23 localhost podman[78813]: 2025-12-02 08:25:23.510836518 +0000 UTC m=+0.145575180 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:25:23 localhost podman[78812]: 2025-12-02 08:25:23.526214633 +0000 UTC m=+0.165433085 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent) Dec 2 03:25:23 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:25:23 localhost podman[78813]: 2025-12-02 08:25:23.55979958 +0000 UTC m=+0.194538302 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, container_name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4) Dec 2 03:25:23 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:25:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:25:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:25:30 localhost podman[78857]: 2025-12-02 08:25:30.417805255 +0000 UTC m=+0.061979404 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:25:30 localhost podman[78857]: 2025-12-02 08:25:30.43837906 +0000 UTC m=+0.082553249 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, container_name=collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 03:25:30 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:25:30 localhost podman[78858]: 2025-12-02 08:25:30.435275996 +0000 UTC m=+0.072632621 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 2 03:25:30 localhost podman[78858]: 2025-12-02 08:25:30.51432707 +0000 UTC m=+0.151683715 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:25:30 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:25:30 localhost sshd[78895]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:25:31 localhost systemd-logind[757]: New session 34 of user zuul. Dec 2 03:25:31 localhost systemd[1]: Started Session 34 of User zuul. 
Dec 2 03:25:32 localhost python3[79004]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 03:25:39 localhost python3[79267]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Dec 2 03:25:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:25:43 localhost podman[79284]: 2025-12-02 08:25:43.421680718 +0000 UTC m=+0.063715980 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-type=git, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com) Dec 2 03:25:43 localhost podman[79284]: 2025-12-02 08:25:43.608689326 +0000 UTC m=+0.250724628 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step1, container_name=metrics_qdr, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:25:43 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:25:47 localhost python3[79465]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Dec 2 03:25:47 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Dec 2 03:25:47 localhost systemd-journald[47611]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Dec 2 03:25:47 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 03:25:47 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 03:25:47 localhost rsyslogd[754]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 03:25:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:25:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:25:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:25:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:25:50 localhost podman[79535]: 2025-12-02 08:25:50.44919462 +0000 UTC m=+0.085984991 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=nova_migration_target, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible) Dec 2 03:25:50 localhost systemd[1]: tmp-crun.R09bUh.mount: Deactivated successfully. 
Dec 2 03:25:50 localhost podman[79536]: 2025-12-02 08:25:50.529264981 +0000 UTC m=+0.160741338 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1) Dec 2 03:25:50 localhost podman[79546]: 2025-12-02 08:25:50.581978174 +0000 UTC m=+0.210106881 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Dec 2 03:25:50 localhost podman[79536]: 2025-12-02 08:25:50.60480279 +0000 UTC m=+0.236279157 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4) Dec 2 03:25:50 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:25:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:25:50 localhost podman[79546]: 2025-12-02 08:25:50.643200667 +0000 UTC m=+0.271329364 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Dec 2 03:25:50 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:25:50 localhost podman[79534]: 2025-12-02 08:25:50.506940789 +0000 UTC m=+0.142970740 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, com.redhat.component=openstack-cron-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Dec 2 03:25:50 localhost podman[79534]: 2025-12-02 08:25:50.691189592 +0000 UTC m=+0.327219563 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 03:25:50 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:25:50 localhost podman[79624]: 2025-12-02 08:25:50.692038424 +0000 UTC m=+0.060480592 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-nova-compute) Dec 2 03:25:50 localhost podman[79624]: 2025-12-02 08:25:50.778109038 +0000 UTC m=+0.146551236 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 2 03:25:50 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:25:50 localhost podman[79535]: 2025-12-02 08:25:50.800155463 +0000 UTC m=+0.436945864 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_migration_target, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:25:50 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:25:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:25:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:25:54 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:25:54 localhost recover_tripleo_nova_virtqemud[79663]: 62312 Dec 2 03:25:54 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Dec 2 03:25:54 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:25:54 localhost podman[79651]: 2025-12-02 08:25:54.423679235 +0000 UTC m=+0.066774133 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:25:54 localhost podman[79650]: 2025-12-02 08:25:54.482714398 +0000 UTC m=+0.124741687 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:25:54 localhost podman[79650]: 2025-12-02 08:25:54.512365769 +0000 UTC m=+0.154393048 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true) Dec 2 03:25:54 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:25:54 localhost podman[79651]: 2025-12-02 08:25:54.568316729 +0000 UTC m=+0.211411597 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4) Dec 2 03:25:54 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:26:01 localhost podman[79699]: 2025-12-02 08:26:01.445401612 +0000 UTC m=+0.087875194 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible) Dec 2 03:26:01 localhost podman[79699]: 2025-12-02 08:26:01.460223241 +0000 UTC m=+0.102696893 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, version=17.1.12, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid) Dec 2 03:26:01 localhost podman[79698]: 2025-12-02 08:26:01.477845487 +0000 UTC m=+0.120527804 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, architecture=x86_64, 
url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, container_name=collectd, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=) Dec 2 03:26:01 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:26:01 localhost podman[79698]: 2025-12-02 08:26:01.488042972 +0000 UTC m=+0.130725299 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git) Dec 2 03:26:01 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:26:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:26:14 localhost podman[79740]: 2025-12-02 08:26:14.446321766 +0000 UTC m=+0.083374652 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 
qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com) Dec 2 03:26:14 localhost podman[79740]: 2025-12-02 08:26:14.662058998 +0000 UTC m=+0.299111754 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:26:14 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:26:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:26:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:26:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:26:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:26:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:26:21 localhost podman[79772]: 2025-12-02 08:26:21.485771279 +0000 UTC m=+0.097640525 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:11:48Z, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git) Dec 2 03:26:21 localhost systemd[1]: tmp-crun.8JDPRH.mount: Deactivated successfully. Dec 2 03:26:21 localhost podman[79773]: 2025-12-02 08:26:21.544826634 +0000 UTC m=+0.151498861 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, 
container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public) Dec 2 03:26:21 localhost podman[79773]: 2025-12-02 08:26:21.567938097 +0000 UTC m=+0.174610334 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:26:21 localhost podman[79769]: 2025-12-02 08:26:21.522724927 +0000 UTC m=+0.138864979 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, maintainer=OpenStack TripleO Team) Dec 2 03:26:21 localhost podman[79769]: 2025-12-02 08:26:21.605813109 +0000 UTC m=+0.221953131 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:26:21 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:26:21 localhost podman[79772]: 2025-12-02 08:26:21.618864381 +0000 UTC m=+0.230733567 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:26:21 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:26:21 localhost podman[79770]: 2025-12-02 08:26:21.670975048 +0000 UTC m=+0.286952726 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_migration_target, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:26:21 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:26:21 localhost podman[79771]: 2025-12-02 08:26:21.620592829 +0000 UTC m=+0.234201993 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, container_name=nova_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:26:21 localhost podman[79771]: 2025-12-02 08:26:21.751144911 +0000 UTC m=+0.364754075 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4) Dec 2 03:26:21 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:26:22 localhost podman[79770]: 2025-12-02 08:26:22.025947378 +0000 UTC m=+0.641925066 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 2 03:26:22 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:26:22 localhost systemd[1]: tmp-crun.zVmcNf.mount: Deactivated successfully. Dec 2 03:26:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:26:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:26:25 localhost podman[79890]: 2025-12-02 08:26:25.442630599 +0000 UTC m=+0.084753899 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, container_name=ovn_metadata_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team) Dec 2 03:26:25 localhost podman[79890]: 2025-12-02 08:26:25.477180221 +0000 UTC m=+0.119303511 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:26:25 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:26:25 localhost podman[79891]: 2025-12-02 08:26:25.48829413 +0000 UTC m=+0.130174294 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, vcs-type=git, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:26:25 localhost podman[79891]: 2025-12-02 08:26:25.538095155 +0000 UTC m=+0.179975338 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:26:25 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:26:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:26:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:26:32 localhost podman[79940]: 2025-12-02 08:26:32.443637504 +0000 UTC m=+0.083467704 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 2 03:26:32 localhost podman[79940]: 2025-12-02 08:26:32.479455001 +0000 UTC m=+0.119285181 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044) Dec 2 03:26:32 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:26:32 localhost podman[79939]: 2025-12-02 08:26:32.496163682 +0000 UTC m=+0.138340815 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd) Dec 2 03:26:32 localhost podman[79939]: 2025-12-02 08:26:32.50796282 +0000 UTC m=+0.150139943 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:26:32 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:26:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:26:45 localhost podman[79991]: 2025-12-02 08:26:45.042761507 +0000 UTC m=+0.075438047 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, vcs-type=git) Dec 2 03:26:45 localhost podman[79991]: 2025-12-02 08:26:45.196530716 +0000 UTC m=+0.229207196 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true) Dec 2 03:26:45 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:26:46 localhost systemd[1]: session-34.scope: Deactivated successfully. Dec 2 03:26:46 localhost systemd[1]: session-34.scope: Consumed 5.803s CPU time. Dec 2 03:26:46 localhost systemd-logind[757]: Session 34 logged out. Waiting for processes to exit. Dec 2 03:26:46 localhost systemd-logind[757]: Removed session 34. Dec 2 03:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 03:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:26:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:26:52 localhost podman[80127]: 2025-12-02 08:26:52.458996309 +0000 UTC m=+0.090511603 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Dec 2 03:26:52 localhost podman[80127]: 2025-12-02 08:26:52.471965719 +0000 UTC m=+0.103481033 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:26:52 localhost systemd[1]: tmp-crun.mKKSF1.mount: Deactivated successfully. 
Dec 2 03:26:52 localhost podman[80130]: 2025-12-02 08:26:52.484075336 +0000 UTC m=+0.103292869 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, tcib_managed=true) Dec 2 03:26:52 localhost podman[80128]: 2025-12-02 08:26:52.488150386 +0000 UTC m=+0.120204895 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:26:52 localhost podman[80130]: 2025-12-02 08:26:52.511859816 +0000 UTC m=+0.131077319 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:26:52 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:26:52 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:26:52 localhost podman[80129]: 2025-12-02 08:26:52.566265864 +0000 UTC m=+0.193175614 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 2 03:26:52 localhost podman[80141]: 2025-12-02 08:26:52.609773669 +0000 UTC m=+0.228179110 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:26:52 localhost podman[80129]: 2025-12-02 08:26:52.621190507 +0000 UTC m=+0.248100287 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team) Dec 2 03:26:52 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:26:52 localhost podman[80141]: 2025-12-02 08:26:52.639464159 +0000 UTC m=+0.257869600 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 
17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, version=17.1.12) Dec 2 03:26:52 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:26:52 localhost podman[80128]: 2025-12-02 08:26:52.855031367 +0000 UTC m=+0.487085896 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4) Dec 2 03:26:52 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:26:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:26:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:26:56 localhost systemd[1]: tmp-crun.GLYFPr.mount: Deactivated successfully. 
Dec 2 03:26:56 localhost podman[80245]: 2025-12-02 08:26:56.437815071 +0000 UTC m=+0.082352743 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=) Dec 2 03:26:56 localhost systemd[1]: tmp-crun.gCejVv.mount: Deactivated successfully. 
Dec 2 03:26:56 localhost podman[80246]: 2025-12-02 08:26:56.493059502 +0000 UTC m=+0.130487273 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T23:34:05Z, tcib_managed=true, release=1761123044) Dec 2 03:26:56 localhost podman[80245]: 2025-12-02 08:26:56.501965283 +0000 UTC m=+0.146502955 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 2 03:26:56 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. 
Dec 2 03:26:56 localhost podman[80246]: 2025-12-02 08:26:56.543098213 +0000 UTC m=+0.180525994 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller) Dec 2 03:26:56 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:26:59 localhost sshd[80292]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:26:59 localhost systemd-logind[757]: New session 35 of user zuul. Dec 2 03:26:59 localhost systemd[1]: Started Session 35 of User zuul. Dec 2 03:26:59 localhost python3[80311]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 03:27:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:27:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:27:03 localhost podman[80314]: 2025-12-02 08:27:03.436412023 +0000 UTC m=+0.074611324 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public) Dec 2 03:27:03 localhost podman[80314]: 2025-12-02 08:27:03.447894233 +0000 UTC m=+0.086093534 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:27:03 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:27:03 localhost podman[80313]: 2025-12-02 08:27:03.530345858 +0000 UTC m=+0.168501068 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, release=1761123044, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:27:03 localhost podman[80313]: 2025-12-02 08:27:03.566070653 +0000 UTC m=+0.204225883 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:27:03 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:27:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:27:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4435 writes, 20K keys, 4435 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4435 writes, 447 syncs, 9.92 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:27:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:27:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 5176 writes, 22K keys, 5176 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5176 writes, 608 syncs, 8.51 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:27:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:27:15 localhost podman[80354]: 2025-12-02 08:27:15.441148742 +0000 UTC m=+0.081597913 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, url=https://www.redhat.com, container_name=metrics_qdr, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, config_id=tripleo_step1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true) Dec 2 03:27:15 localhost podman[80354]: 2025-12-02 08:27:15.650010798 +0000 UTC m=+0.290459949 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, config_id=tripleo_step1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1) Dec 2 03:27:15 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:27:23 localhost systemd[1]: tmp-crun.5e00ty.mount: Deactivated successfully. Dec 2 03:27:23 localhost podman[80383]: 2025-12-02 08:27:23.503462101 +0000 UTC m=+0.140510813 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, managed_by=tripleo_ansible) Dec 2 03:27:23 localhost podman[80387]: 2025-12-02 08:27:23.462837754 +0000 UTC m=+0.093143985 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:27:23 localhost podman[80385]: 2025-12-02 08:27:23.43079629 +0000 UTC m=+0.064323958 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 
'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, config_id=tripleo_step5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 
nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:27:23 localhost podman[80387]: 2025-12-02 08:27:23.543702336 +0000 UTC m=+0.174008577 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 2 03:27:23 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:27:23 localhost podman[80385]: 2025-12-02 08:27:23.564015084 +0000 UTC m=+0.197542772 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step5, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:27:23 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:27:23 localhost podman[80386]: 2025-12-02 08:27:23.542991967 +0000 UTC m=+0.175917968 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:27:23 localhost podman[80383]: 2025-12-02 08:27:23.615083593 +0000 UTC m=+0.252132355 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, container_name=logrotate_crond, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:27:23 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:27:23 localhost podman[80384]: 2025-12-02 08:27:23.694419594 +0000 UTC m=+0.331821817 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, 
build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:27:23 localhost podman[80386]: 2025-12-02 08:27:23.724000082 +0000 UTC m=+0.356926133 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:27:23 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:27:24 localhost podman[80384]: 2025-12-02 08:27:24.075137779 +0000 UTC m=+0.712540002 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, 
release=1761123044, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:27:24 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:27:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:27:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:27:27 localhost systemd[1]: tmp-crun.FBQzCU.mount: Deactivated successfully. 
Dec 2 03:27:27 localhost podman[80500]: 2025-12-02 08:27:27.427842484 +0000 UTC m=+0.070266258 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Dec 2 03:27:27 localhost systemd[1]: tmp-crun.Da34Gi.mount: Deactivated successfully. 
Dec 2 03:27:27 localhost podman[80500]: 2025-12-02 08:27:27.504058501 +0000 UTC m=+0.146482295 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:27:27 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. 
Dec 2 03:27:27 localhost podman[80501]: 2025-12-02 08:27:27.506814065 +0000 UTC m=+0.143750301 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, 
name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:27:27 localhost podman[80501]: 2025-12-02 08:27:27.591129561 +0000 UTC m=+0.228065837 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1) Dec 2 03:27:27 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:27:27 localhost python3[80545]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 2 03:27:31 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 2 03:27:31 localhost systemd[1]: Starting man-db-cache-update.service... Dec 2 03:27:31 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 2 03:27:31 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 2 03:27:31 localhost systemd[1]: Finished man-db-cache-update.service. Dec 2 03:27:31 localhost systemd[1]: run-ra022a57c9fd94d61b5edd3b255f82757.service: Deactivated successfully. Dec 2 03:27:31 localhost systemd[1]: run-r51c86fcc3b5f49f8a74ae00e936a7bd1.service: Deactivated successfully. Dec 2 03:27:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. 
Dec 2 03:27:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:27:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:27:34 localhost recover_tripleo_nova_virtqemud[80716]: 62312 Dec 2 03:27:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:27:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:27:34 localhost systemd[1]: tmp-crun.Y9alkA.mount: Deactivated successfully. Dec 2 03:27:34 localhost podman[80713]: 2025-12-02 08:27:34.447861502 +0000 UTC m=+0.085264722 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:27:34 localhost podman[80713]: 2025-12-02 08:27:34.456792004 +0000 UTC m=+0.094195304 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, tcib_managed=true) Dec 2 03:27:34 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:27:34 localhost systemd[1]: tmp-crun.IqvZqF.mount: Deactivated successfully. Dec 2 03:27:34 localhost podman[80714]: 2025-12-02 08:27:34.513242817 +0000 UTC m=+0.146538225 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, tcib_managed=true) Dec 2 03:27:34 localhost podman[80714]: 2025-12-02 08:27:34.550336558 +0000 UTC m=+0.183631946 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:27:34 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:27:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:27:46 localhost systemd[1]: tmp-crun.PmXE1g.mount: Deactivated successfully. Dec 2 03:27:46 localhost podman[80753]: 2025-12-02 08:27:46.451318197 +0000 UTC m=+0.095756966 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Dec 2 03:27:46 localhost podman[80753]: 2025-12-02 08:27:46.621441523 +0000 UTC m=+0.265880172 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12) Dec 2 03:27:46 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:27:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:27:54 localhost systemd[1]: tmp-crun.JMA51J.mount: Deactivated successfully. Dec 2 03:27:54 localhost systemd[1]: tmp-crun.VwICoc.mount: Deactivated successfully. Dec 2 03:27:54 localhost podman[80958]: 2025-12-02 08:27:54.543571196 +0000 UTC m=+0.149879043 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step5) Dec 2 03:27:54 localhost podman[80956]: 2025-12-02 08:27:54.50988008 +0000 UTC m=+0.119149882 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, 
health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, 
batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, container_name=logrotate_crond, version=17.1.12) Dec 2 03:27:54 localhost podman[80960]: 2025-12-02 08:27:54.585965264 +0000 UTC m=+0.185674722 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:27:54 localhost podman[80958]: 2025-12-02 08:27:54.594887731 +0000 UTC m=+0.201195578 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:27:54 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:27:54 localhost podman[80960]: 2025-12-02 08:27:54.636901299 +0000 UTC m=+0.236610757 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi) Dec 2 03:27:54 localhost podman[80959]: 2025-12-02 08:27:54.64245466 +0000 UTC m=+0.247420822 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, 
io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12) Dec 2 03:27:54 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:27:54 localhost podman[80957]: 2025-12-02 08:27:54.67740101 +0000 UTC m=+0.286750144 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, 
com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:27:54 localhost podman[80959]: 2025-12-02 08:27:54.689108347 +0000 UTC m=+0.294074509 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:27:54 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:27:54 localhost podman[80956]: 2025-12-02 08:27:54.743158011 +0000 UTC m=+0.352427753 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, container_name=logrotate_crond, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, batch=17.1_20251118.1) Dec 2 03:27:54 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:27:54 localhost podman[80957]: 2025-12-02 08:27:54.993124238 +0000 UTC m=+0.602473432 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:27:55 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:27:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:27:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:27:58 localhost podman[81078]: 2025-12-02 08:27:58.439142587 +0000 UTC m=+0.073883241 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1761123044, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:27:58 localhost podman[81077]: 2025-12-02 08:27:58.479547194 +0000 UTC m=+0.115726654 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Dec 2 03:27:58 localhost podman[81078]: 2025-12-02 08:27:58.511444945 +0000 UTC m=+0.146185659 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, 
name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller) Dec 2 03:27:58 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:27:58 localhost podman[81077]: 2025-12-02 08:27:58.527032382 +0000 UTC m=+0.163211832 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true) Dec 2 03:27:58 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:28:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:28:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:28:05 localhost systemd[1]: tmp-crun.5YoRS7.mount: Deactivated successfully. 
Dec 2 03:28:05 localhost podman[81125]: 2025-12-02 08:28:05.432840712 +0000 UTC m=+0.071307594 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:51:28Z, version=17.1.12, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, config_id=tripleo_step3) Dec 2 03:28:05 localhost podman[81125]: 2025-12-02 08:28:05.439326507 +0000 UTC m=+0.077793379 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, container_name=collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, release=1761123044) Dec 2 03:28:05 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:28:05 localhost podman[81126]: 2025-12-02 08:28:05.48154061 +0000 UTC m=+0.117798646 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, container_name=iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:28:05 localhost podman[81126]: 2025-12-02 08:28:05.515203226 +0000 UTC m=+0.151461282 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:28:05 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:28:15 localhost python3[81177]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 03:28:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:28:17 localhost systemd[1]: tmp-crun.XVnl76.mount: Deactivated successfully. Dec 2 03:28:17 localhost podman[81181]: 2025-12-02 08:28:17.421754899 +0000 UTC m=+0.066664096 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:28:17 localhost podman[81181]: 2025-12-02 08:28:17.601638503 +0000 UTC m=+0.246547680 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 
17.1 qdrouterd) Dec 2 03:28:17 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:28:19 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 2 03:28:19 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 2 03:28:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:28:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:28:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:28:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:28:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:28:25 localhost podman[81399]: 2025-12-02 08:28:25.432791262 +0000 UTC m=+0.065417364 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:28:25 localhost podman[81398]: 2025-12-02 08:28:25.441595037 +0000 UTC m=+0.073247565 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:28:25 localhost podman[81399]: 2025-12-02 08:28:25.485980005 +0000 UTC m=+0.118606097 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:28:25 localhost podman[81398]: 2025-12-02 08:28:25.495011195 +0000 UTC m=+0.126663703 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, version=17.1.12, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:28:25 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:28:25 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:28:25 localhost podman[81397]: 2025-12-02 08:28:25.538671385 +0000 UTC m=+0.176395417 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z) Dec 2 03:28:25 localhost podman[81396]: 2025-12-02 08:28:25.565701362 +0000 UTC m=+0.202757277 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044) Dec 2 03:28:25 localhost podman[81396]: 2025-12-02 08:28:25.570432033 +0000 UTC m=+0.207487898 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, 
build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 
cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Dec 2 03:28:25 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:28:25 localhost podman[81410]: 2025-12-02 08:28:25.488917529 +0000 UTC m=+0.115950979 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true) Dec 2 03:28:25 localhost podman[81410]: 2025-12-02 08:28:25.622073526 +0000 UTC m=+0.249106946 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:28:25 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:28:25 localhost podman[81397]: 2025-12-02 08:28:25.860519459 +0000 UTC m=+0.498243461 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, container_name=nova_migration_target, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:28:25 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:28:26 localhost systemd[1]: tmp-crun.Jlcl5n.mount: Deactivated successfully. Dec 2 03:28:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:28:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:28:29 localhost podman[81515]: 2025-12-02 08:28:29.429411193 +0000 UTC m=+0.072502885 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:28:29 localhost podman[81516]: 2025-12-02 08:28:29.481392905 +0000 UTC m=+0.120311780 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:28:29 localhost podman[81515]: 2025-12-02 08:28:29.494109768 +0000 UTC m=+0.137201450 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 2 03:28:29 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:28:29 localhost podman[81516]: 2025-12-02 08:28:29.505672752 +0000 UTC m=+0.144591607 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:28:29 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:28:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:28:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:28:36 localhost podman[81564]: 2025-12-02 08:28:36.444693144 +0000 UTC m=+0.083166775 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:28:36 localhost podman[81564]: 2025-12-02 08:28:36.453739915 +0000 UTC m=+0.092213546 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 2 03:28:36 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:28:36 localhost podman[81563]: 2025-12-02 08:28:36.548531355 +0000 UTC m=+0.189129690 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, 
version=17.1.12, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:28:36 localhost podman[81563]: 2025-12-02 08:28:36.563213749 +0000 UTC m=+0.203812124 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:51:28Z, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:28:36 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:28:46 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 2 03:28:46 localhost recover_tripleo_nova_virtqemud[81603]: 62312 Dec 2 03:28:46 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:28:46 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:28:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:28:48 localhost systemd[1]: tmp-crun.P9gH4V.mount: Deactivated successfully. Dec 2 03:28:48 localhost podman[81604]: 2025-12-02 08:28:48.43077131 +0000 UTC m=+0.079270937 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:28:48 localhost podman[81604]: 2025-12-02 08:28:48.648441235 +0000 UTC m=+0.296940892 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=metrics_qdr, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:28:48 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:28:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:28:56 localhost podman[81756]: 2025-12-02 08:28:56.458263192 +0000 UTC m=+0.093134090 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:28:56 localhost podman[81757]: 2025-12-02 08:28:56.438970002 +0000 UTC m=+0.067408715 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, container_name=nova_compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:28:56 localhost podman[81755]: 2025-12-02 08:28:56.504273142 +0000 UTC m=+0.139765485 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:28:56 localhost podman[81755]: 2025-12-02 08:28:56.515073147 +0000 UTC m=+0.150565480 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:28:56 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:28:56 localhost podman[81758]: 2025-12-02 08:28:56.554780536 +0000 UTC m=+0.179340121 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.12, io.openshift.expose-services=) Dec 2 03:28:56 localhost podman[81757]: 2025-12-02 08:28:56.572549738 +0000 UTC m=+0.200988551 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 
nova-compute) Dec 2 03:28:56 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:28:56 localhost podman[81758]: 2025-12-02 08:28:56.62806062 +0000 UTC m=+0.252620195 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, release=1761123044, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public) Dec 2 03:28:56 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:28:56 localhost podman[81769]: 2025-12-02 08:28:56.701273161 +0000 UTC m=+0.323207179 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:28:56 localhost podman[81769]: 2025-12-02 08:28:56.729910049 +0000 UTC m=+0.351844057 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:28:56 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:28:56 localhost podman[81756]: 2025-12-02 08:28:56.803996604 +0000 UTC m=+0.438867502 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:28:56 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:29:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:29:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:29:00 localhost systemd[1]: tmp-crun.TwDO0U.mount: Deactivated successfully. 
Dec 2 03:29:00 localhost podman[81874]: 2025-12-02 08:29:00.423967176 +0000 UTC m=+0.066748208 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64) Dec 2 03:29:00 localhost podman[81874]: 2025-12-02 08:29:00.479239872 +0000 UTC m=+0.122020824 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, container_name=ovn_controller, batch=17.1_20251118.1, config_id=tripleo_step4, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team) Dec 2 03:29:00 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:29:00 localhost podman[81873]: 2025-12-02 08:29:00.432124114 +0000 UTC m=+0.075174703 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Dec 2 03:29:00 localhost podman[81873]: 2025-12-02 08:29:00.566103781 +0000 UTC m=+0.209154410 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:29:00 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:29:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:29:08 localhost systemd[1]: tmp-crun.JU34lu.mount: Deactivated successfully. 
Dec 2 03:29:08 localhost podman[81923]: 2025-12-02 08:29:08.510585704 +0000 UTC m=+0.079276378 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:29:08 localhost podman[81924]: 2025-12-02 08:29:08.53009843 +0000 UTC m=+0.093685864 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, release=1761123044, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) 
Dec 2 03:29:08 localhost podman[81924]: 2025-12-02 08:29:08.538452492 +0000 UTC m=+0.102039976 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 2 03:29:08 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:29:08 localhost podman[81923]: 2025-12-02 08:29:08.59343919 +0000 UTC m=+0.162129924 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Dec 2 03:29:08 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:29:09 localhost systemd[1]: tmp-crun.Iu9Vmp.mount: Deactivated successfully. 
Dec 2 03:29:12 localhost python3[81976]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 03:29:15 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 2 03:29:15 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 2 03:29:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:29:19 localhost systemd[1]: tmp-crun.sWrQXk.mount: Deactivated successfully. Dec 2 03:29:19 localhost podman[82164]: 2025-12-02 08:29:19.452070987 +0000 UTC m=+0.094190876 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd) Dec 2 03:29:19 localhost podman[82164]: 2025-12-02 08:29:19.627941789 +0000 UTC m=+0.270061658 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr) Dec 2 03:29:19 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:29:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:29:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:29:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:29:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:29:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:29:27 localhost podman[82196]: 2025-12-02 08:29:27.488650131 +0000 UTC m=+0.120707001 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, 
com.redhat.component=openstack-nova-compute-container, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5) Dec 2 03:29:27 localhost podman[82201]: 2025-12-02 08:29:27.441500321 +0000 UTC m=+0.071545109 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, release=1761123044) Dec 2 03:29:27 localhost podman[82194]: 2025-12-02 08:29:27.49845773 +0000 UTC m=+0.135852686 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, name=rhosp17/openstack-cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack 
Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:29:27 localhost podman[82194]: 2025-12-02 08:29:27.53305959 +0000 UTC m=+0.170454546 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, 
io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, batch=17.1_20251118.1, 
name=rhosp17/openstack-cron, version=17.1.12, config_id=tripleo_step4, release=1761123044) Dec 2 03:29:27 localhost podman[82195]: 2025-12-02 08:29:27.538501468 +0000 UTC m=+0.173865702 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, container_name=nova_migration_target, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container) Dec 2 03:29:27 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:29:27 localhost podman[82196]: 2025-12-02 08:29:27.542653185 +0000 UTC m=+0.174710045 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, description=Red 
Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, version=17.1.12) Dec 2 03:29:27 localhost podman[82208]: 2025-12-02 08:29:27.463346747 +0000 UTC m=+0.086267645 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1) Dec 2 03:29:27 localhost podman[82201]: 2025-12-02 08:29:27.572546364 +0000 UTC m=+0.202591152 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z) Dec 2 03:29:27 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:29:27 localhost podman[82208]: 2025-12-02 08:29:27.592288446 +0000 UTC m=+0.215209344 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:29:27 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:29:27 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:29:27 localhost podman[82195]: 2025-12-02 08:29:27.88499293 +0000 UTC m=+0.520357214 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container) Dec 2 03:29:27 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:29:28 localhost systemd[1]: tmp-crun.31tALY.mount: Deactivated successfully. Dec 2 03:29:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 03:29:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:29:31 localhost podman[82313]: 2025-12-02 08:29:31.441907558 +0000 UTC m=+0.082177431 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, 
batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 2 03:29:31 localhost systemd[1]: tmp-crun.r4syXG.mount: Deactivated successfully. Dec 2 03:29:31 localhost podman[82312]: 2025-12-02 08:29:31.496815174 +0000 UTC m=+0.139704223 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:29:31 localhost podman[82313]: 2025-12-02 08:29:31.519009489 +0000 UTC m=+0.159279362 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, release=1761123044, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com) Dec 2 03:29:31 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:29:31 localhost podman[82312]: 2025-12-02 08:29:31.571016682 +0000 UTC m=+0.213905631 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 03:29:31 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:29:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:29:39 localhost systemd[1]: tmp-crun.SS5eL9.mount: Deactivated successfully. 
Dec 2 03:29:39 localhost podman[82357]: 2025-12-02 08:29:39.4670641 +0000 UTC m=+0.089263381 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z) Dec 2 03:29:39 localhost podman[82357]: 2025-12-02 08:29:39.478999114 +0000 UTC m=+0.101198445 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, container_name=collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, distribution-scope=public) Dec 2 03:29:39 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:29:39 localhost podman[82358]: 2025-12-02 08:29:39.574880802 +0000 UTC m=+0.193301816 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T23:44:13Z) Dec 2 03:29:39 localhost podman[82358]: 2025-12-02 08:29:39.585134373 +0000 UTC m=+0.203555397 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:29:39 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:29:42 localhost python3[82409]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Dec 2 03:29:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:29:50 localhost podman[82410]: 2025-12-02 08:29:50.440884384 +0000 UTC m=+0.077620244 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, release=1761123044) Dec 2 03:29:50 localhost podman[82410]: 2025-12-02 08:29:50.636198611 +0000 UTC m=+0.272934461 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:29:50 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:29:53 localhost podman[82587]: 2025-12-02 08:29:53.370395199 +0000 UTC m=+0.085394342 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 03:29:53 localhost podman[82587]: 2025-12-02 08:29:53.461038175 +0000 UTC m=+0.176037358 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, 
distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.buildah.version=1.41.4) Dec 2 03:29:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:29:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:29:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:29:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:29:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:29:58 localhost podman[82730]: 2025-12-02 08:29:58.479081129 +0000 UTC m=+0.108800678 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:36:58Z) Dec 2 03:29:58 localhost podman[82729]: 2025-12-02 08:29:58.527548351 +0000 UTC m=+0.157246460 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, batch=17.1_20251118.1) Dec 2 03:29:58 localhost podman[82733]: 2025-12-02 08:29:58.58176311 +0000 UTC m=+0.204726238 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true) Dec 2 03:29:58 localhost podman[82733]: 2025-12-02 08:29:58.609493485 +0000 UTC m=+0.232456623 container exec_died 
7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:29:58 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:29:58 localhost podman[82731]: 2025-12-02 08:29:58.635029775 +0000 UTC m=+0.264100288 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step5, release=1761123044, container_name=nova_compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat 
OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:29:58 localhost podman[82729]: 2025-12-02 08:29:58.646275481 +0000 UTC m=+0.275973570 
container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, 
release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:29:58 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:29:58 localhost podman[82731]: 2025-12-02 08:29:58.665088939 +0000 UTC m=+0.294159422 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, tcib_managed=true, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:29:58 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:29:58 localhost podman[82732]: 2025-12-02 08:29:58.732662067 +0000 UTC m=+0.357230735 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12) Dec 2 03:29:58 localhost podman[82732]: 2025-12-02 08:29:58.764125908 +0000 UTC m=+0.388694536 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, vcs-type=git, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:29:58 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:29:58 localhost podman[82730]: 2025-12-02 08:29:58.813131513 +0000 UTC m=+0.442851092 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Dec 2 03:29:58 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:30:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:30:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:30:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:30:02 localhost recover_tripleo_nova_virtqemud[82851]: 62312 Dec 2 03:30:02 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:30:02 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:30:02 localhost podman[82848]: 2025-12-02 08:30:02.455876085 +0000 UTC m=+0.091089497 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ovn_metadata_agent, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, distribution-scope=public) Dec 2 03:30:02 localhost podman[82849]: 2025-12-02 08:30:02.500139761 +0000 UTC m=+0.135394674 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:30:02 localhost podman[82848]: 2025-12-02 08:30:02.53507931 +0000 UTC m=+0.170292682 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:30:02 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:30:02 localhost podman[82849]: 2025-12-02 08:30:02.55157849 +0000 UTC m=+0.186833423 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, release=1761123044, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 
6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:30:02 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:30:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:30:10 localhost podman[82897]: 2025-12-02 08:30:10.455591312 +0000 UTC m=+0.094309540 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, architecture=x86_64, url=https://www.redhat.com) Dec 2 03:30:10 localhost podman[82897]: 2025-12-02 08:30:10.499055988 +0000 UTC m=+0.137774206 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, architecture=x86_64, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Dec 2 03:30:10 localhost systemd[1]: tmp-crun.FrxGOH.mount: Deactivated successfully. Dec 2 03:30:10 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:30:10 localhost podman[82898]: 2025-12-02 08:30:10.515482414 +0000 UTC m=+0.154323465 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:30:10 localhost podman[82898]: 2025-12-02 08:30:10.550254989 +0000 UTC m=+0.189096060 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, distribution-scope=public, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 2 03:30:10 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:30:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:30:21 localhost podman[82937]: 2025-12-02 08:30:21.450997596 +0000 UTC m=+0.088871922 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=metrics_qdr, architecture=x86_64, release=1761123044, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12) Dec 2 03:30:21 localhost podman[82937]: 2025-12-02 08:30:21.650521729 +0000 UTC m=+0.288396065 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, config_id=tripleo_step1, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:30:21 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:30:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:30:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:30:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:30:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:30:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:30:29 localhost systemd[1]: tmp-crun.SfbreC.mount: Deactivated successfully. Dec 2 03:30:29 localhost podman[82970]: 2025-12-02 08:30:29.483232807 +0000 UTC m=+0.111700052 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1761123044, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:30:29 localhost podman[82968]: 2025-12-02 08:30:29.499153152 +0000 UTC m=+0.138719349 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-cron, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git) Dec 2 03:30:29 localhost podman[82976]: 2025-12-02 08:30:29.517736585 +0000 UTC m=+0.141226033 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:30:29 localhost podman[82968]: 2025-12-02 08:30:29.533240979 +0000 UTC m=+0.172807106 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true) Dec 2 03:30:29 localhost podman[82976]: 2025-12-02 08:30:29.540094114 +0000 UTC m=+0.163583492 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.expose-services=, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc.) Dec 2 03:30:29 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:30:29 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:30:29 localhost podman[82969]: 2025-12-02 08:30:29.447757996 +0000 UTC m=+0.084675605 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, release=1761123044, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_migration_target) Dec 2 03:30:29 localhost podman[82982]: 2025-12-02 08:30:29.60994043 +0000 UTC m=+0.234364291 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:30:29 localhost podman[82970]: 2025-12-02 08:30:29.66620173 +0000 UTC m=+0.294668945 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, vcs-type=git, config_id=tripleo_step5, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:30:29 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:30:29 localhost podman[82982]: 2025-12-02 08:30:29.698949223 +0000 UTC m=+0.323373144 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc.) Dec 2 03:30:29 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:30:29 localhost podman[82969]: 2025-12-02 08:30:29.815061786 +0000 UTC m=+0.451979385 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, 
Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:30:29 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:30:30 localhost systemd[1]: tmp-crun.bu9tY5.mount: Deactivated successfully. Dec 2 03:30:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:30:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:30:33 localhost podman[83089]: 2025-12-02 08:30:33.4445391 +0000 UTC m=+0.084525950 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044) Dec 2 03:30:33 localhost podman[83089]: 2025-12-02 08:30:33.48580862 +0000 UTC m=+0.125795440 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ovn_metadata_agent, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z) Dec 2 03:30:33 localhost podman[83090]: 2025-12-02 08:30:33.500423971 +0000 UTC m=+0.133592177 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, 
version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Dec 2 03:30:33 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:30:33 localhost podman[83090]: 2025-12-02 08:30:33.520905343 +0000 UTC m=+0.154073509 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO 
Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, vcs-type=git) Dec 2 03:30:33 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:30:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:30:41 localhost podman[83135]: 2025-12-02 08:30:41.441357594 +0000 UTC m=+0.080666503 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:30:41 localhost podman[83135]: 2025-12-02 08:30:41.457395672 +0000 UTC m=+0.096704571 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z) Dec 2 03:30:41 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:30:41 localhost podman[83136]: 2025-12-02 08:30:41.546710963 +0000 UTC m=+0.183529748 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, container_name=iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12) Dec 2 03:30:41 localhost podman[83136]: 2025-12-02 08:30:41.559133949 +0000 UTC m=+0.195952724 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:30:41 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:30:42 localhost systemd[1]: session-35.scope: Deactivated successfully. Dec 2 03:30:42 localhost systemd[1]: session-35.scope: Consumed 19.340s CPU time. Dec 2 03:30:42 localhost systemd-logind[757]: Session 35 logged out. Waiting for processes to exit. Dec 2 03:30:42 localhost systemd-logind[757]: Removed session 35. Dec 2 03:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:30:52 localhost podman[83218]: 2025-12-02 08:30:52.436286895 +0000 UTC m=+0.078531158 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, version=17.1.12, url=https://www.redhat.com) Dec 2 03:30:52 localhost podman[83218]: 2025-12-02 08:30:52.61701388 +0000 UTC m=+0.259258193 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, container_name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true) Dec 2 03:30:52 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:31:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:31:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:31:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:31:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:31:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:31:00 localhost systemd[1]: tmp-crun.iLJBEV.mount: Deactivated successfully. Dec 2 03:31:00 localhost podman[83326]: 2025-12-02 08:31:00.446898277 +0000 UTC m=+0.086128980 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:31:00 localhost podman[83327]: 2025-12-02 08:31:00.450002096 +0000 UTC m=+0.081548085 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.12, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:31:00 localhost podman[83325]: 2025-12-02 08:31:00.496692173 +0000 UTC m=+0.135528897 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, version=17.1.12, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, container_name=logrotate_crond, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) Dec 2 03:31:00 localhost podman[83325]: 2025-12-02 08:31:00.50206966 +0000 UTC m=+0.140906394 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, vcs-type=git, 
name=rhosp17/openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:31:00 localhost podman[83338]: 2025-12-02 08:31:00.509954841 +0000 UTC m=+0.136549764 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 
03:31:00 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:31:00 localhost podman[83338]: 2025-12-02 08:31:00.530241527 +0000 UTC m=+0.156836500 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, container_name=ceilometer_agent_compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Dec 2 03:31:00 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:31:00 localhost podman[83327]: 2025-12-02 08:31:00.543368981 +0000 UTC m=+0.174914960 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, release=1761123044, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, 
container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12) Dec 2 03:31:00 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:31:00 localhost podman[83339]: 2025-12-02 08:31:00.613292659 +0000 UTC m=+0.236001453 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:12:45Z) Dec 2 03:31:00 localhost podman[83339]: 2025-12-02 08:31:00.633360209 +0000 UTC m=+0.256068943 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 2 03:31:00 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:31:00 localhost podman[83326]: 2025-12-02 08:31:00.815052049 +0000 UTC m=+0.454282782 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true) Dec 2 03:31:00 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:31:01 localhost systemd[1]: tmp-crun.qkG2g0.mount: Deactivated successfully. Dec 2 03:31:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:31:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:31:04 localhost systemd[1]: tmp-crun.sX7kJT.mount: Deactivated successfully. 
Dec 2 03:31:04 localhost podman[83446]: 2025-12-02 08:31:04.445750806 +0000 UTC m=+0.085947247 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 2 03:31:04 localhost systemd[1]: tmp-crun.5LTvCC.mount: Deactivated successfully. 
Dec 2 03:31:04 localhost podman[83447]: 2025-12-02 08:31:04.501863533 +0000 UTC m=+0.137704532 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, version=17.1.12, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:31:04 localhost podman[83446]: 2025-12-02 08:31:04.52026209 +0000 UTC m=+0.160458521 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, distribution-scope=public, url=https://www.redhat.com) Dec 2 03:31:04 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. 
Dec 2 03:31:04 localhost podman[83447]: 2025-12-02 08:31:04.555306642 +0000 UTC m=+0.191147601 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, release=1761123044, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, 
vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Dec 2 03:31:04 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:31:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:31:12 localhost systemd[1]: tmp-crun.djyhon.mount: Deactivated successfully. Dec 2 03:31:12 localhost podman[83494]: 2025-12-02 08:31:12.455800176 +0000 UTC m=+0.091558160 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T23:44:13Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid) Dec 2 03:31:12 localhost podman[83493]: 2025-12-02 08:31:12.458822443 +0000 UTC m=+0.098308111 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, architecture=x86_64) Dec 2 03:31:12 localhost podman[83493]: 2025-12-02 08:31:12.47092805 +0000 UTC m=+0.110413658 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3) Dec 2 03:31:12 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:31:12 localhost podman[83494]: 2025-12-02 08:31:12.489905473 +0000 UTC m=+0.125663467 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid) Dec 2 03:31:12 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:31:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:31:23 localhost recover_tripleo_nova_virtqemud[83534]: 62312 Dec 2 03:31:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:31:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:31:23 localhost podman[83532]: 2025-12-02 08:31:23.439636018 +0000 UTC m=+0.080756917 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:31:23 localhost podman[83532]: 2025-12-02 08:31:23.662822877 +0000 UTC m=+0.303943796 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, distribution-scope=public, release=1761123044, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:31:23 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:31:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:31:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:31:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:31:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:31:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:31:31 localhost podman[83565]: 2025-12-02 08:31:31.487967382 +0000 UTC m=+0.122234332 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:31:31 localhost podman[83567]: 2025-12-02 08:31:31.539516 +0000 UTC m=+0.169017369 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z) Dec 2 03:31:31 localhost podman[83567]: 2025-12-02 08:31:31.558016256 +0000 UTC m=+0.187517585 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, 
container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:31:31 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:31:31 localhost podman[83566]: 2025-12-02 08:31:31.457256732 +0000 UTC m=+0.093196917 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044) Dec 2 03:31:31 localhost podman[83566]: 2025-12-02 08:31:31.640390486 +0000 UTC m=+0.276330661 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container) Dec 2 03:31:31 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:31:31 localhost podman[83573]: 2025-12-02 08:31:31.685429547 +0000 UTC m=+0.311383040 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:31:31 localhost podman[83573]: 2025-12-02 08:31:31.710930635 +0000 UTC m=+0.336884128 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12) Dec 2 03:31:31 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:31:31 localhost podman[83564]: 2025-12-02 08:31:31.511656669 +0000 UTC m=+0.148609702 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=logrotate_crond, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 2 03:31:31 localhost podman[83564]: 2025-12-02 08:31:31.799075363 +0000 UTC m=+0.436028396 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., container_name=logrotate_crond, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.expose-services=) Dec 2 03:31:31 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:31:31 localhost podman[83565]: 2025-12-02 08:31:31.831056367 +0000 UTC m=+0.465323337 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:31:31 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:31:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:31:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:31:35 localhost systemd[1]: tmp-crun.KPcC3E.mount: Deactivated successfully. 
Dec 2 03:31:35 localhost podman[83684]: 2025-12-02 08:31:35.42657447 +0000 UTC m=+0.072183384 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 2 03:31:35 localhost systemd[1]: tmp-crun.ypykxX.mount: Deactivated successfully. 
Dec 2 03:31:35 localhost podman[83685]: 2025-12-02 08:31:35.450668827 +0000 UTC m=+0.089543517 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:31:35 localhost podman[83685]: 2025-12-02 08:31:35.469995056 +0000 UTC m=+0.108869746 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-ovn-controller, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:31:35 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:31:35 localhost podman[83684]: 2025-12-02 08:31:35.496982533 +0000 UTC m=+0.142591507 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Dec 2 03:31:35 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. 
Dec 2 03:31:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:31:43 localhost systemd[1]: tmp-crun.YyFUux.mount: Deactivated successfully. Dec 2 03:31:43 localhost podman[83730]: 2025-12-02 08:31:43.448238811 +0000 UTC m=+0.086842984 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:31:43 localhost podman[83729]: 2025-12-02 08:31:43.418072848 +0000 UTC m=+0.063354923 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Dec 2 03:31:43 localhost podman[83730]: 2025-12-02 08:31:43.480896984 +0000 UTC m=+0.119501127 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044) Dec 2 03:31:43 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:31:43 localhost podman[83729]: 2025-12-02 08:31:43.50195666 +0000 UTC m=+0.147238705 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, 
batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:31:43 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:31:53 localhost systemd-logind[757]: Existing logind session ID 29 used by new audit session, ignoring. Dec 2 03:31:53 localhost systemd[1]: Created slice User Slice of UID 0. Dec 2 03:31:53 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 2 03:31:53 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 2 03:31:53 localhost systemd[1]: Starting User Manager for UID 0... Dec 2 03:31:53 localhost systemd[84191]: Queued start job for default target Main User Target. Dec 2 03:31:53 localhost systemd[84191]: Created slice User Application Slice. 
Dec 2 03:31:53 localhost systemd[84191]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 2 03:31:53 localhost systemd[84191]: Started Daily Cleanup of User's Temporary Directories. Dec 2 03:31:53 localhost systemd[84191]: Reached target Paths. Dec 2 03:31:53 localhost systemd[84191]: Reached target Timers. Dec 2 03:31:53 localhost systemd[84191]: Starting D-Bus User Message Bus Socket... Dec 2 03:31:53 localhost systemd[84191]: Starting Create User's Volatile Files and Directories... Dec 2 03:31:53 localhost systemd[84191]: Listening on D-Bus User Message Bus Socket. Dec 2 03:31:53 localhost systemd[84191]: Reached target Sockets. Dec 2 03:31:53 localhost systemd[84191]: Finished Create User's Volatile Files and Directories. Dec 2 03:31:53 localhost systemd[84191]: Reached target Basic System. Dec 2 03:31:53 localhost systemd[84191]: Reached target Main User Target. Dec 2 03:31:53 localhost systemd[84191]: Startup finished in 149ms. Dec 2 03:31:53 localhost systemd[1]: Started User Manager for UID 0. Dec 2 03:31:53 localhost systemd[1]: Started Session c11 of User root. Dec 2 03:31:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:31:53 localhost podman[84207]: 2025-12-02 08:31:53.764983689 +0000 UTC m=+0.063513857 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:31:53 localhost podman[84207]: 2025-12-02 08:31:53.973931679 +0000 UTC m=+0.272461817 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.) Dec 2 03:31:53 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:31:54 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Dec 2 03:31:54 localhost kernel: device tap4a318f6a-b3 entered promiscuous mode Dec 2 03:31:54 localhost NetworkManager[5965]: [1764664314.7696] manager: (tap4a318f6a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/13) Dec 2 03:31:54 localhost systemd-udevd[84255]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 03:31:54 localhost NetworkManager[5965]: [1764664314.7871] device (tap4a318f6a-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 03:31:54 localhost NetworkManager[5965]: [1764664314.7880] device (tap4a318f6a-b3): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 2 03:31:54 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 2 03:31:54 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Dec 2 03:31:54 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Dec 2 03:31:54 localhost systemd-machined[84262]: New machine qemu-1-instance-00000002. Dec 2 03:31:54 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000002. Dec 2 03:31:54 localhost NetworkManager[5965]: [1764664314.9986] manager: (tap595e1c9b-70): new Veth device (/org/freedesktop/NetworkManager/Devices/14) Dec 2 03:31:55 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap595e1c9b-71: link becomes ready Dec 2 03:31:55 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap595e1c9b-70: link becomes ready Dec 2 03:31:55 localhost NetworkManager[5965]: [1764664315.0409] device (tap595e1c9b-70): carrier: link connected Dec 2 03:31:55 localhost kernel: device tap595e1c9b-70 entered promiscuous mode Dec 2 03:31:56 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 2 03:31:56 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 2 03:31:56 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Dec 2 03:31:56 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. 
Dec 2 03:31:57 localhost podman[84443]: 2025-12-02 08:31:57.49145496 +0000 UTC m=+0.029820745 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 2 03:31:57 localhost podman[84443]: 2025-12-02 08:31:57.541297662 +0000 UTC m=+0.079663397 container create 7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:31:57 localhost systemd[1]: Started libpod-conmon-7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0.scope. Dec 2 03:31:57 localhost systemd[1]: tmp-crun.d71t3a.mount: Deactivated successfully. 
Dec 2 03:31:57 localhost systemd[1]: Started libcrun container. Dec 2 03:31:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f61ced3f88a0be87d665800e8e7cb17559a616ee2c3a746c87a603ddb5549d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 03:31:57 localhost podman[84443]: 2025-12-02 08:31:57.609419003 +0000 UTC m=+0.147784738 container init 7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 2 03:31:57 localhost podman[84443]: 2025-12-02 08:31:57.616208769 +0000 UTC m=+0.154574494 container start 
7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 2 03:31:57 localhost setroubleshoot[84359]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count. 
For complete SELinux messages run: sealert -l c62ace7d-fc71-492d-8738-6cc52b8f8f8f Dec 2 03:31:57 localhost setroubleshoot[84359]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count.#012#012***** Plugin qemu_file_image (98.8 confidence) suggests *******************#012#012If max_map_count is a virtualization target#012Then you need to change the label on max_map_count'#012Do#012# semanage fcontext -a -t virt_image_t 'max_map_count'#012# restorecon -v 'max_map_count'#012#012***** Plugin catchall (2.13 confidence) suggests **************************#012#012If you believe that qemu-kvm should be allowed read access on the max_map_count file by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'qemu-kvm' --raw | audit2allow -M my-qemukvm#012# semodule -X 300 -i my-qemukvm.pp#012 Dec 2 03:32:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:32:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:32:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:32:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:32:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:32:02 localhost podman[84499]: 2025-12-02 08:32:02.460847755 +0000 UTC m=+0.096385844 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:32:02 localhost podman[84499]: 2025-12-02 08:32:02.469215704 +0000 UTC m=+0.104753793 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z) Dec 2 03:32:02 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:32:02 localhost podman[84514]: 2025-12-02 08:32:02.51409526 +0000 UTC m=+0.137317893 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:32:02 localhost podman[84501]: 2025-12-02 08:32:02.566900903 +0000 UTC m=+0.197083446 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 2 03:32:02 localhost podman[84501]: 2025-12-02 08:32:02.606735282 +0000 UTC 
m=+0.236917825 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:32:02 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:32:02 localhost podman[84502]: 2025-12-02 08:32:02.623034187 +0000 UTC m=+0.249978371 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, 
version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 2 03:32:02 localhost podman[84500]: 2025-12-02 08:32:02.669103697 +0000 UTC m=+0.304519593 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=nova_migration_target, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:32:02 localhost podman[84502]: 2025-12-02 08:32:02.689021801 +0000 UTC m=+0.315965955 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, 
config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com) Dec 2 03:32:02 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:32:02 localhost podman[84514]: 2025-12-02 08:32:02.744908608 +0000 UTC m=+0.368131231 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:32:02 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:32:03 localhost podman[84500]: 2025-12-02 08:32:03.05604534 +0000 UTC m=+0.691461246 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4) Dec 2 03:32:03 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:32:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:32:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:32:06 localhost systemd[1]: tmp-crun.XjWpvF.mount: Deactivated successfully. 
Dec 2 03:32:06 localhost podman[84619]: 2025-12-02 08:32:06.443663831 +0000 UTC m=+0.077587971 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, config_id=tripleo_step4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:32:06 localhost systemd[1]: tmp-crun.wslz9a.mount: Deactivated successfully. 
Dec 2 03:32:06 localhost podman[84620]: 2025-12-02 08:32:06.504415292 +0000 UTC m=+0.135522905 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:32:06 localhost podman[84619]: 2025-12-02 08:32:06.513030537 +0000 UTC m=+0.146954617 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:32:06 localhost podman[84620]: 2025-12-02 08:32:06.524901081 +0000 UTC m=+0.156008664 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=ovn_controller, version=17.1.12, 
io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:32:06 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:32:06 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:32:07 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Dec 2 03:32:07 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.012s CPU time. 
Dec 2 03:32:07 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Dec 2 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:32:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:32:14 localhost podman[84667]: 2025-12-02 08:32:14.44107409 +0000 UTC m=+0.084285144 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Dec 2 03:32:14 localhost podman[84667]: 2025-12-02 08:32:14.453921081 +0000 UTC m=+0.097132125 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) 
Dec 2 03:32:14 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:32:14 localhost podman[84668]: 2025-12-02 08:32:14.537951497 +0000 UTC m=+0.178661022 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, release=1761123044, name=rhosp17/openstack-iscsid) Dec 2 03:32:14 localhost podman[84668]: 2025-12-02 08:32:14.573008106 +0000 UTC m=+0.213717671 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step3, architecture=x86_64, vendor=Red Hat, Inc.) Dec 2 03:32:14 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:32:15 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34900 [02/Dec/2025:08:32:14.483] listener listener/metadata 0/0/0/1171/1171 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 2 03:32:15 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34908 [02/Dec/2025:08:32:15.733] listener listener/metadata 0/0/0/9/9 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1" Dec 2 03:32:15 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34918 [02/Dec/2025:08:32:15.780] listener listener/metadata 0/0/0/11/11 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 2 03:32:15 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34924 [02/Dec/2025:08:32:15.826] listener listener/metadata 0/0/0/11/11 200 120 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" Dec 2 03:32:15 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34940 [02/Dec/2025:08:32:15.876] listener listener/metadata 0/0/0/12/12 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" Dec 2 03:32:15 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34944 [02/Dec/2025:08:32:15.927] listener listener/metadata 0/0/0/12/12 200 133 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" Dec 2 03:32:15 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34952 [02/Dec/2025:08:32:15.975] listener listener/metadata 0/0/0/12/12 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" Dec 2 03:32:16 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34954 [02/Dec/2025:08:32:16.030] listener listener/metadata 0/0/0/10/10 
200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1" Dec 2 03:32:16 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34956 [02/Dec/2025:08:32:16.080] listener listener/metadata 0/0/0/8/8 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" Dec 2 03:32:16 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34962 [02/Dec/2025:08:32:16.127] listener listener/metadata 0/0/0/10/10 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1" Dec 2 03:32:16 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34978 [02/Dec/2025:08:32:16.176] listener listener/metadata 0/0/0/12/12 200 139 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" Dec 2 03:32:16 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34986 [02/Dec/2025:08:32:16.216] listener listener/metadata 0/0/0/11/11 200 122 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" Dec 2 03:32:16 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:34990 [02/Dec/2025:08:32:16.296] listener listener/metadata 0/0/0/14/14 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" Dec 2 03:32:16 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:35000 [02/Dec/2025:08:32:16.363] listener listener/metadata 0/0/0/13/13 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" Dec 2 03:32:16 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:35002 [02/Dec/2025:08:32:16.427] listener listener/metadata 0/0/0/14/14 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" Dec 2 03:32:16 localhost 
haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[84474]: 192.168.0.102:35004 [02/Dec/2025:08:32:16.492] listener listener/metadata 0/0/0/14/14 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" Dec 2 03:32:20 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Dec 2 03:32:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:32:24 localhost podman[84704]: 2025-12-02 08:32:24.450854011 +0000 UTC m=+0.094543755 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 2 03:32:24 localhost podman[84704]: 2025-12-02 08:32:24.648137721 +0000 UTC m=+0.291827475 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:32:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:32:24 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:32:24 localhost recover_tripleo_nova_virtqemud[84734]: 62312 Dec 2 03:32:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:32:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:32:33 localhost systemd[1]: tmp-crun.tcRnxu.mount: Deactivated successfully. 
Dec 2 03:32:33 localhost podman[84743]: 2025-12-02 08:32:33.477800616 +0000 UTC m=+0.101762472 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible)
Dec 2 03:32:33 localhost podman[84735]: 2025-12-02 08:32:33.436802266 +0000 UTC m=+0.074641661 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, distribution-scope=public, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 2 03:32:33 localhost podman[84735]: 2025-12-02 08:32:33.519995519 +0000 UTC m=+0.157834914 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public)
Dec 2 03:32:33 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully.
Dec 2 03:32:33 localhost podman[84736]: 2025-12-02 08:32:33.535794131 +0000 UTC m=+0.168713211 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, container_name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 2 03:32:33 localhost podman[84743]: 2025-12-02 08:32:33.56136664 +0000 UTC m=+0.185328466 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, version=17.1.12, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 2 03:32:33 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully.
Dec 2 03:32:33 localhost podman[84737]: 2025-12-02 08:32:33.638357874 +0000 UTC m=+0.270540724 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 2 03:32:33 localhost podman[84737]: 2025-12-02 08:32:33.66495754 +0000 UTC m=+0.297140400 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-type=git, version=17.1.12, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5)
Dec 2 03:32:33 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully.
Dec 2 03:32:33 localhost podman[84755]: 2025-12-02 08:32:33.677982757 +0000 UTC m=+0.297451100 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12)
Dec 2 03:32:33 localhost podman[84755]: 2025-12-02 08:32:33.701230742 +0000 UTC m=+0.320699125 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 2 03:32:33 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully.
Dec 2 03:32:33 localhost podman[84736]: 2025-12-02 08:32:33.898299587 +0000 UTC m=+0.531218627 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true)
Dec 2 03:32:33 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully.
Dec 2 03:32:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.
Dec 2 03:32:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.
Dec 2 03:32:37 localhost systemd[1]: tmp-crun.ODducB.mount: Deactivated successfully.
Dec 2 03:32:37 localhost podman[84858]: 2025-12-02 08:32:37.4521149 +0000 UTC m=+0.090655948 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 2 03:32:37 localhost systemd[1]: tmp-crun.L0A4wW.mount: Deactivated successfully.
Dec 2 03:32:37 localhost podman[84859]: 2025-12-02 08:32:37.503307248 +0000 UTC m=+0.139017369 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 2 03:32:37 localhost podman[84858]: 2025-12-02 08:32:37.522935415 +0000 UTC m=+0.161476513 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, version=17.1.12, container_name=ovn_metadata_agent, batch=17.1_20251118.1)
Dec 2 03:32:37 localhost podman[84859]: 2025-12-02 08:32:37.530899713 +0000 UTC m=+0.166609794 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public)
Dec 2 03:32:37 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully.
Dec 2 03:32:37 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully.
Dec 2 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.
Dec 2 03:32:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.
Dec 2 03:32:45 localhost podman[84906]: 2025-12-02 08:32:45.44364078 +0000 UTC m=+0.077836828 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, container_name=iscsid, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12) Dec 2 03:32:45 localhost podman[84906]: 2025-12-02 08:32:45.479034526 +0000 UTC m=+0.113230554 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., container_name=iscsid, url=https://www.redhat.com) Dec 2 03:32:45 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:32:45 localhost podman[84905]: 2025-12-02 08:32:45.491793805 +0000 UTC m=+0.127959068 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Dec 2 03:32:45 localhost podman[84905]: 2025-12-02 08:32:45.530176305 +0000 UTC m=+0.166341628 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 
17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:32:45 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:32:48 localhost snmpd[69635]: empty variable list in _query Dec 2 03:32:48 localhost snmpd[69635]: empty variable list in _query Dec 2 03:32:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:32:55 localhost systemd[1]: tmp-crun.0UiMFA.mount: Deactivated successfully. Dec 2 03:32:55 localhost podman[84990]: 2025-12-02 08:32:55.417760326 +0000 UTC m=+0.062044207 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step1) Dec 2 03:32:55 localhost podman[84990]: 2025-12-02 08:32:55.572043942 +0000 UTC m=+0.216327823 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, architecture=x86_64, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 2 03:32:55 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:33:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. 
Dec 2 03:33:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:33:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:33:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:33:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:33:04 localhost podman[85099]: 2025-12-02 08:33:04.457757977 +0000 UTC m=+0.085396154 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Dec 2 03:33:04 localhost systemd[1]: tmp-crun.HjzqDL.mount: Deactivated successfully. 
Dec 2 03:33:04 localhost podman[85096]: 2025-12-02 08:33:04.507200538 +0000 UTC m=+0.138232419 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 2 03:33:04 localhost podman[85098]: 2025-12-02 08:33:04.557820521 +0000 UTC m=+0.184977735 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4) Dec 2 03:33:04 localhost podman[85098]: 2025-12-02 08:33:04.613976836 +0000 UTC m=+0.241134010 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 2 03:33:04 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:33:04 localhost podman[85099]: 2025-12-02 08:33:04.630365594 +0000 UTC m=+0.258003841 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Dec 2 03:33:04 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:33:04 localhost podman[85095]: 2025-12-02 08:33:04.617088541 +0000 UTC m=+0.247246587 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, 
config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team) Dec 2 03:33:04 localhost podman[85095]: 2025-12-02 08:33:04.702990928 +0000 UTC m=+0.333148934 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=logrotate_crond) Dec 2 03:33:04 localhost podman[85097]: 2025-12-02 08:33:04.711516911 +0000 UTC m=+0.338979874 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, 
version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step5, distribution-scope=public) Dec 2 03:33:04 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:33:04 localhost podman[85097]: 2025-12-02 08:33:04.76932491 +0000 UTC m=+0.396787853 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:33:04 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:33:04 localhost podman[85096]: 2025-12-02 08:33:04.889092944 +0000 UTC m=+0.520124795 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z) Dec 2 03:33:04 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:33:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:33:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:33:08 localhost podman[85218]: 2025-12-02 08:33:08.457337803 +0000 UTC m=+0.091016169 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4) Dec 2 03:33:08 localhost systemd[1]: tmp-crun.Y5q4x2.mount: Deactivated successfully. 
Dec 2 03:33:08 localhost podman[85219]: 2025-12-02 08:33:08.518268497 +0000 UTC m=+0.148481798 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible) Dec 2 03:33:08 localhost podman[85218]: 2025-12-02 08:33:08.534185382 +0000 UTC m=+0.167863728 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1) Dec 2 03:33:08 localhost podman[85219]: 2025-12-02 08:33:08.542998773 +0000 UTC m=+0.173212084 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:33:08 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:33:08 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. 
Dec 2 03:33:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:33:16 localhost podman[85269]: 2025-12-02 08:33:16.460663435 +0000 UTC m=+0.090095634 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red 
Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 2 03:33:16 localhost podman[85269]: 2025-12-02 08:33:16.500120363 +0000 UTC m=+0.129552552 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, version=17.1.12) Dec 2 03:33:16 localhost systemd[1]: tmp-crun.dftgzn.mount: Deactivated successfully. Dec 2 03:33:16 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:33:16 localhost podman[85268]: 2025-12-02 08:33:16.520381816 +0000 UTC m=+0.152262901 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container) Dec 2 03:33:16 localhost podman[85268]: 2025-12-02 08:33:16.53298551 +0000 UTC m=+0.164866665 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container) Dec 2 03:33:16 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:33:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:33:26 localhost podman[85306]: 2025-12-02 08:33:26.454345847 +0000 UTC m=+0.093219498 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 2 03:33:26 localhost podman[85306]: 2025-12-02 08:33:26.651108643 +0000 UTC m=+0.289982294 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true) Dec 2 03:33:26 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:33:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:33:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:33:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:33:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:33:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:33:35 localhost podman[85336]: 2025-12-02 08:33:35.44611011 +0000 UTC m=+0.088747446 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 2 03:33:35 localhost podman[85336]: 2025-12-02 08:33:35.454937451 +0000 UTC m=+0.097574787 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:33:35 localhost systemd[1]: tmp-crun.XGUMgP.mount: Deactivated successfully. Dec 2 03:33:35 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:33:35 localhost podman[85338]: 2025-12-02 08:33:35.493468034 +0000 UTC m=+0.126186469 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-type=git, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute) Dec 2 03:33:35 localhost podman[85345]: 2025-12-02 08:33:35.467179105 +0000 UTC m=+0.097024831 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, 
tcib_managed=true, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:33:35 localhost podman[85344]: 2025-12-02 08:33:35.522428645 +0000 UTC 
m=+0.154226395 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, distribution-scope=public) Dec 2 03:33:35 localhost podman[85345]: 2025-12-02 08:33:35.547824269 +0000 UTC m=+0.177669995 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:33:35 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:33:35 localhost podman[85344]: 2025-12-02 08:33:35.572883304 +0000 UTC m=+0.204680974 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, version=17.1.12, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute) Dec 2 03:33:35 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:33:35 localhost podman[85338]: 2025-12-02 08:33:35.62322195 +0000 UTC m=+0.255940395 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, 
release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:33:35 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:33:35 localhost podman[85337]: 2025-12-02 08:33:35.704912042 +0000 UTC m=+0.343359143 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-19T00:36:58Z, 
tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 2 03:33:36 localhost podman[85337]: 2025-12-02 08:33:36.128925209 +0000 UTC m=+0.767372290 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:33:36 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:33:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:33:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:33:39 localhost systemd[1]: tmp-crun.DbO9s4.mount: Deactivated successfully. Dec 2 03:33:39 localhost podman[85450]: 2025-12-02 08:33:39.466637047 +0000 UTC m=+0.106247025 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc.) Dec 2 03:33:39 localhost podman[85450]: 2025-12-02 08:33:39.496991706 +0000 UTC m=+0.136601654 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 2 03:33:39 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:33:39 localhost podman[85449]: 2025-12-02 08:33:39.542744646 +0000 UTC m=+0.186095526 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, architecture=x86_64) Dec 2 03:33:39 localhost podman[85449]: 2025-12-02 08:33:39.585061662 +0000 UTC m=+0.228412592 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, 
architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:33:39 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:33:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:33:47 localhost podman[85497]: 2025-12-02 08:33:47.450247728 +0000 UTC m=+0.086725451 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:33:47 localhost podman[85497]: 2025-12-02 08:33:47.46059297 +0000 UTC m=+0.097070753 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=) Dec 2 03:33:47 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:33:47 localhost systemd[1]: tmp-crun.YhGCvL.mount: Deactivated successfully. 
Dec 2 03:33:47 localhost podman[85498]: 2025-12-02 08:33:47.555532574 +0000 UTC m=+0.188118211 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:33:47 localhost podman[85498]: 2025-12-02 08:33:47.564683775 +0000 UTC m=+0.197269392 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:33:47 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:33:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:33:56 localhost recover_tripleo_nova_virtqemud[85582]: 62312 Dec 2 03:33:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:33:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:33:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:33:57 localhost podman[85583]: 2025-12-02 08:33:57.429921473 +0000 UTC m=+0.066240930 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, 
build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=) Dec 2 03:33:57 localhost podman[85583]: 2025-12-02 08:33:57.616186964 +0000 UTC m=+0.252506431 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc.) Dec 2 03:33:57 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:34:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:34:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:34:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:34:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:34:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:34:06 localhost podman[85689]: 2025-12-02 08:34:06.470660815 +0000 UTC m=+0.103420667 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, release=1761123044, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team) Dec 2 03:34:06 localhost podman[85689]: 2025-12-02 08:34:06.505129937 +0000 UTC m=+0.137889799 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:34:06 localhost podman[85691]: 2025-12-02 08:34:06.517478884 +0000 UTC m=+0.145250560 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4) Dec 2 03:34:06 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:34:06 localhost podman[85691]: 2025-12-02 08:34:06.56529676 +0000 UTC m=+0.193068466 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., 
com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Dec 2 03:34:06 localhost podman[85692]: 2025-12-02 08:34:06.565363022 +0000 UTC m=+0.187687700 container 
health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:34:06 localhost podman[85692]: 2025-12-02 08:34:06.615273466 +0000 UTC m=+0.237598114 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:34:06 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:34:06 localhost podman[85690]: 2025-12-02 08:34:06.633922046 +0000 UTC m=+0.265075185 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:34:07 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:34:07 localhost podman[85690]: 2025-12-02 08:34:07.040140437 +0000 UTC m=+0.671293576 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:34:07 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. 
Dec 2 03:34:07 localhost podman[85704]: 2025-12-02 08:34:07.070159937 +0000 UTC m=+0.689760990 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:34:07 localhost podman[85704]: 2025-12-02 08:34:07.088493768 +0000 UTC m=+0.708094881 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12) Dec 2 03:34:07 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:34:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:34:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:34:10 localhost podman[85809]: 2025-12-02 08:34:10.452731329 +0000 UTC m=+0.092864879 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:34:10 localhost podman[85810]: 2025-12-02 08:34:10.509046368 +0000 UTC m=+0.148054077 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Dec 2 03:34:10 localhost podman[85810]: 2025-12-02 08:34:10.530443343 +0000 UTC m=+0.169451022 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, distribution-scope=public, release=1761123044, version=17.1.12, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:34:10 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:34:10 localhost podman[85809]: 2025-12-02 08:34:10.582600137 +0000 UTC m=+0.222733687 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:34:10 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:34:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:34:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:34:18 localhost podman[85857]: 2025-12-02 08:34:18.498922762 +0000 UTC m=+0.130064045 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:34:18 localhost podman[85857]: 2025-12-02 08:34:18.50871304 +0000 UTC m=+0.139854303 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid) Dec 2 03:34:18 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:34:18 localhost podman[85856]: 2025-12-02 08:34:18.607925311 +0000 UTC m=+0.239603098 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 
17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4) Dec 2 03:34:18 localhost podman[85856]: 2025-12-02 08:34:18.618203352 +0000 UTC m=+0.249881159 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 
'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, container_name=collectd, io.openshift.expose-services=) Dec 2 03:34:18 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:34:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:34:28 localhost systemd[1]: tmp-crun.rg8n1J.mount: Deactivated successfully. 
Dec 2 03:34:28 localhost podman[85896]: 2025-12-02 08:34:28.465747279 +0000 UTC m=+0.109948776 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, 
container_name=metrics_qdr, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:34:28 localhost podman[85896]: 2025-12-02 08:34:28.662513375 +0000 UTC m=+0.306714852 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:34:28 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:34:37 localhost podman[85925]: 2025-12-02 08:34:37.534691669 +0000 UTC m=+0.167468587 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=logrotate_crond, name=rhosp17/openstack-cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:34:37 localhost podman[85925]: 2025-12-02 08:34:37.56509011 +0000 UTC m=+0.197867038 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=logrotate_crond, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 03:34:37 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:34:37 localhost podman[85927]: 2025-12-02 08:34:37.61888102 +0000 UTC m=+0.244680797 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step5, version=17.1.12, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:34:37 localhost podman[85926]: 2025-12-02 08:34:37.671140528 +0000 UTC m=+0.302652682 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, 
managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:34:37 localhost podman[85927]: 
2025-12-02 08:34:37.691661698 +0000 UTC m=+0.317461415 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc.) Dec 2 03:34:37 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:34:37 localhost podman[85939]: 2025-12-02 08:34:37.781834613 +0000 UTC m=+0.402512810 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public) Dec 2 03:34:37 localhost podman[85939]: 2025-12-02 08:34:37.806896067 +0000 UTC m=+0.427574334 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 2 03:34:37 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:34:37 localhost podman[85928]: 2025-12-02 08:34:37.822251807 +0000 UTC m=+0.445355081 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:34:37 localhost podman[85928]: 2025-12-02 08:34:37.874118244 +0000 UTC m=+0.497221508 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Dec 2 03:34:37 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:34:38 localhost podman[85926]: 2025-12-02 08:34:38.038238499 +0000 UTC m=+0.669750613 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:34:38 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:34:38 localhost systemd[1]: tmp-crun.hbPxII.mount: Deactivated successfully. Dec 2 03:34:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:34:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:34:41 localhost systemd[1]: tmp-crun.PzMkJN.mount: Deactivated successfully. 
Dec 2 03:34:41 localhost podman[86043]: 2025-12-02 08:34:41.474766387 +0000 UTC m=+0.083759340 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 2 03:34:41 localhost podman[86043]: 2025-12-02 08:34:41.490934109 +0000 UTC m=+0.099927022 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:34:41 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:34:41 localhost podman[86042]: 2025-12-02 08:34:41.577502723 +0000 UTC m=+0.190684781 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:34:41 localhost podman[86042]: 2025-12-02 08:34:41.615360618 +0000 UTC m=+0.228542616 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:34:41 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:34:42 localhost systemd[1]: tmp-crun.9QhACE.mount: Deactivated successfully. Dec 2 03:34:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:34:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:34:49 localhost systemd[1]: tmp-crun.bkOuTZ.mount: Deactivated successfully. Dec 2 03:34:49 localhost systemd[1]: tmp-crun.AMeEHK.mount: Deactivated successfully. 
Dec 2 03:34:49 localhost podman[86094]: 2025-12-02 08:34:49.496293385 +0000 UTC m=+0.134943878 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:44:13Z, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com) Dec 2 03:34:49 localhost podman[86093]: 2025-12-02 08:34:49.461932357 +0000 UTC m=+0.104769974 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, architecture=x86_64, build-date=2025-11-18T22:51:28Z, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3) Dec 2 03:34:49 localhost podman[86094]: 2025-12-02 08:34:49.530098469 +0000 UTC m=+0.168748972 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, 
build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible) Dec 2 03:34:49 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:34:49 localhost podman[86093]: 2025-12-02 08:34:49.543547837 +0000 UTC m=+0.186385454 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
vendor=Red Hat, Inc., container_name=collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:34:49 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:34:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:34:59 localhost systemd[1]: tmp-crun.r9eXfC.mount: Deactivated successfully. 
Dec 2 03:34:59 localhost podman[86179]: 2025-12-02 08:34:59.435531722 +0000 UTC m=+0.080354126 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Dec 2 03:34:59 localhost podman[86179]: 2025-12-02 08:34:59.637187793 +0000 UTC m=+0.282010237 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, distribution-scope=public) Dec 2 03:34:59 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:35:08 localhost podman[86286]: 2025-12-02 08:35:08.458182337 +0000 UTC m=+0.088118069 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, tcib_managed=true, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:35:08 localhost systemd[1]: tmp-crun.0eowMB.mount: Deactivated successfully. Dec 2 03:35:08 localhost podman[86298]: 2025-12-02 08:35:08.510013322 +0000 UTC m=+0.133473577 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:35:08 localhost podman[86298]: 2025-12-02 08:35:08.542909302 +0000 UTC m=+0.166369577 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc.) Dec 2 03:35:08 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:35:08 localhost podman[86287]: 2025-12-02 08:35:08.561009967 +0000 UTC m=+0.190151078 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step5, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12) Dec 2 03:35:08 localhost podman[86285]: 2025-12-02 08:35:08.609052989 +0000 UTC m=+0.241472489 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, vcs-type=git, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron) Dec 2 03:35:08 localhost podman[86285]: 2025-12-02 08:35:08.618772864 +0000 UTC m=+0.251192284 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T22:49:32Z) Dec 2 03:35:08 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:35:08 localhost podman[86288]: 2025-12-02 08:35:08.661911804 +0000 UTC m=+0.286982854 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc.) 
Dec 2 03:35:08 localhost podman[86287]: 2025-12-02 08:35:08.662918901 +0000 UTC m=+0.292060012 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5) Dec 2 03:35:08 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:35:08 localhost podman[86288]: 2025-12-02 08:35:08.745101576 +0000 UTC m=+0.370172617 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute) Dec 2 03:35:08 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:35:08 localhost podman[86286]: 2025-12-02 08:35:08.874076721 +0000 UTC m=+0.504012423 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Dec 2 03:35:08 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:35:09 localhost systemd[1]: tmp-crun.E6E5zX.mount: Deactivated successfully. Dec 2 03:35:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 03:35:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:35:12 localhost systemd[1]: tmp-crun.9iJfQN.mount: Deactivated successfully. Dec 2 03:35:12 localhost podman[86405]: 2025-12-02 08:35:12.46173889 +0000 UTC m=+0.092249852 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:35:12 localhost podman[86405]: 2025-12-02 08:35:12.488029878 +0000 UTC m=+0.118540890 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:35:12 localhost systemd[1]: tmp-crun.dkyVEY.mount: Deactivated successfully. Dec 2 03:35:12 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:35:12 localhost podman[86404]: 2025-12-02 08:35:12.503556822 +0000 UTC m=+0.138573817 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, 
Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Dec 2 03:35:12 localhost podman[86404]: 2025-12-02 08:35:12.545905779 +0000 UTC m=+0.180922764 container exec_died 
1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=) Dec 2 03:35:12 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:35:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:35:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:35:20 localhost systemd[1]: tmp-crun.nHRQ1X.mount: Deactivated successfully. 
Dec 2 03:35:20 localhost podman[86451]: 2025-12-02 08:35:20.466712587 +0000 UTC m=+0.090127575 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 2 03:35:20 localhost podman[86451]: 2025-12-02 08:35:20.507125721 +0000 UTC m=+0.130540729 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 2 03:35:20 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:35:20 localhost podman[86452]: 2025-12-02 08:35:20.51806512 +0000 UTC m=+0.136679586 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 03:35:20 localhost podman[86452]: 2025-12-02 08:35:20.606132486 +0000 UTC m=+0.224746932 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, tcib_managed=true) Dec 2 03:35:20 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:35:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:35:30 localhost podman[86490]: 2025-12-02 08:35:30.437048109 +0000 UTC m=+0.081073827 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z) Dec 2 03:35:30 localhost podman[86490]: 2025-12-02 08:35:30.656071054 +0000 UTC m=+0.300096792 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:35:30 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:35:39 localhost systemd[1]: tmp-crun.eo1gh5.mount: Deactivated successfully. Dec 2 03:35:39 localhost podman[86520]: 2025-12-02 08:35:39.487501784 +0000 UTC m=+0.123527977 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:35:39 localhost podman[86519]: 2025-12-02 08:35:39.431228476 +0000 UTC m=+0.073524160 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container) Dec 2 03:35:39 localhost podman[86528]: 2025-12-02 08:35:39.463440677 +0000 UTC m=+0.091468201 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:35:39 localhost podman[86519]: 2025-12-02 08:35:39.532892105 +0000 UTC m=+0.175187759 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:35:39 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:35:39 localhost podman[86533]: 2025-12-02 08:35:39.574241354 +0000 UTC m=+0.194337041 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:35:39 localhost podman[86528]: 2025-12-02 08:35:39.59786444 +0000 UTC m=+0.225891984 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute) Dec 2 03:35:39 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:35:39 localhost podman[86526]: 2025-12-02 08:35:39.547984577 +0000 UTC m=+0.178673724 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:35:39 localhost podman[86533]: 2025-12-02 08:35:39.656856672 +0000 UTC m=+0.276952379 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:35:39 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:35:39 localhost podman[86526]: 2025-12-02 08:35:39.677530897 +0000 UTC m=+0.308220054 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 2 03:35:39 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:35:39 localhost podman[86520]: 2025-12-02 08:35:39.851953733 +0000 UTC m=+0.487979986 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, version=17.1.12, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 2 03:35:39 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:35:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:35:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:35:43 localhost podman[86642]: 2025-12-02 08:35:43.428382933 +0000 UTC m=+0.064172255 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, architecture=x86_64) Dec 2 03:35:43 localhost podman[86642]: 2025-12-02 08:35:43.456986985 +0000 UTC m=+0.092776317 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, 
name=rhosp17/openstack-ovn-controller, release=1761123044, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 2 03:35:43 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:35:43 localhost systemd[1]: tmp-crun.qYgwrx.mount: Deactivated successfully. Dec 2 03:35:43 localhost podman[86641]: 2025-12-02 08:35:43.548894926 +0000 UTC m=+0.183909556 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:35:43 localhost podman[86641]: 2025-12-02 08:35:43.58707882 +0000 UTC m=+0.222093500 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 2 03:35:43 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:35:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:35:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:35:51 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:35:51 localhost recover_tripleo_nova_virtqemud[86697]: 62312 Dec 2 03:35:51 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:35:51 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:35:51 localhost systemd[1]: tmp-crun.9YrZLb.mount: Deactivated successfully. 
Dec 2 03:35:51 localhost podman[86690]: 2025-12-02 08:35:51.438707997 +0000 UTC m=+0.083313658 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:35:51 localhost systemd[1]: tmp-crun.qlfYd8.mount: Deactivated successfully. Dec 2 03:35:51 localhost podman[86689]: 2025-12-02 08:35:51.473742334 +0000 UTC m=+0.122049687 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:35:51 localhost podman[86689]: 2025-12-02 08:35:51.482075942 +0000 UTC m=+0.130383295 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, 
managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd) Dec 2 03:35:51 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:35:51 localhost podman[86690]: 2025-12-02 08:35:51.498081169 +0000 UTC m=+0.142686810 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible) Dec 2 03:35:51 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:36:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:36:01 localhost systemd[1]: tmp-crun.XMMfST.mount: Deactivated successfully. 
Dec 2 03:36:01 localhost podman[86775]: 2025-12-02 08:36:01.446546063 +0000 UTC m=+0.079651857 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, version=17.1.12, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 
17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Dec 2 03:36:01 localhost podman[86775]: 2025-12-02 08:36:01.64588971 +0000 UTC m=+0.278995454 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64) Dec 2 03:36:01 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:36:10 localhost podman[86880]: 2025-12-02 08:36:10.436749051 +0000 UTC m=+0.073859269 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, release=1761123044, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:36:10 localhost systemd[1]: tmp-crun.umNOmG.mount: Deactivated successfully. Dec 2 03:36:10 localhost podman[86894]: 2025-12-02 08:36:10.508344698 +0000 UTC m=+0.131861324 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 2 03:36:10 localhost podman[86882]: 2025-12-02 08:36:10.464965922 +0000 UTC m=+0.093252089 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 2 03:36:10 localhost podman[86894]: 2025-12-02 08:36:10.529904919 +0000 UTC m=+0.153421555 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com) Dec 2 03:36:10 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:36:10 localhost podman[86881]: 2025-12-02 08:36:10.490746316 +0000 UTC m=+0.126095267 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:36:10 localhost podman[86882]: 2025-12-02 08:36:10.546969069 +0000 UTC m=+0.175255206 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 2 03:36:10 localhost 
systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:36:10 localhost podman[86888]: 2025-12-02 08:36:10.587576771 +0000 UTC m=+0.216145186 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.expose-services=, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true) Dec 2 03:36:10 localhost podman[86888]: 2025-12-02 08:36:10.612067128 +0000 UTC m=+0.240635533 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:36:10 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:36:10 localhost podman[86880]: 2025-12-02 08:36:10.623215929 +0000 UTC m=+0.260326157 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:36:10 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:36:10 localhost podman[86881]: 2025-12-02 08:36:10.857127407 +0000 UTC m=+0.492476318 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:36:10 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:36:14 localhost podman[87002]: 2025-12-02 08:36:14.408249633 +0000 UTC m=+0.052455662 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, build-date=2025-11-19T00:14:25Z, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_metadata_agent) Dec 2 03:36:14 localhost podman[87002]: 2025-12-02 08:36:14.454963099 +0000 UTC m=+0.099169148 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 2 03:36:14 localhost systemd[1]: tmp-crun.T9HKY7.mount: Deactivated successfully. Dec 2 03:36:14 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:36:14 localhost podman[87003]: 2025-12-02 08:36:14.471947097 +0000 UTC m=+0.114821662 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1) Dec 2 03:36:14 localhost podman[87003]: 2025-12-02 08:36:14.491984081 +0000 UTC m=+0.134858676 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:36:14 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:36:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:36:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:36:22 localhost podman[87050]: 2025-12-02 08:36:22.426128074 +0000 UTC m=+0.068089145 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:36:22 localhost podman[87050]: 2025-12-02 08:36:22.464046009 +0000 UTC m=+0.106007000 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd) Dec 2 03:36:22 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:36:22 localhost podman[87051]: 2025-12-02 08:36:22.531297202 +0000 UTC m=+0.168887944 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, container_name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:36:22 localhost podman[87051]: 2025-12-02 08:36:22.541882628 +0000 UTC m=+0.179473370 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, release=1761123044, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:36:22 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:36:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:36:32 localhost podman[87088]: 2025-12-02 08:36:32.435482733 +0000 UTC m=+0.080254182 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=metrics_qdr, distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:36:32 localhost podman[87088]: 2025-12-02 08:36:32.630923654 +0000 UTC m=+0.275695033 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:36:32 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:36:41 localhost podman[87118]: 2025-12-02 08:36:41.450773984 +0000 UTC m=+0.085861022 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container) Dec 2 03:36:41 localhost systemd[1]: tmp-crun.T3lGnH.mount: Deactivated successfully. Dec 2 03:36:41 localhost podman[87121]: 2025-12-02 08:36:41.512451597 +0000 UTC m=+0.142051986 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 03:36:41 localhost podman[87119]: 2025-12-02 08:36:41.548065874 +0000 UTC m=+0.181779498 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute) Dec 2 03:36:41 localhost podman[87121]: 2025-12-02 08:36:41.562001035 +0000 UTC m=+0.191601424 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:36:41 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:36:41 localhost podman[87117]: 2025-12-02 08:36:41.602636717 +0000 UTC m=+0.238918995 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, 
version=17.1.12, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:36:41 localhost podman[87119]: 2025-12-02 08:36:41.616944429 +0000 UTC m=+0.250658063 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z) Dec 2 03:36:41 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:36:41 localhost podman[87120]: 2025-12-02 08:36:41.665006678 +0000 UTC m=+0.295263455 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 2 03:36:41 localhost podman[87117]: 2025-12-02 08:36:41.692463929 +0000 UTC m=+0.328746287 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=logrotate_crond, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container) Dec 2 03:36:41 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:36:41 localhost podman[87120]: 2025-12-02 08:36:41.71711956 +0000 UTC m=+0.347376337 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, version=17.1.12, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 2 03:36:41 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:36:41 localhost podman[87118]: 2025-12-02 08:36:41.770372131 +0000 UTC m=+0.405459199 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4) Dec 2 03:36:41 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:36:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:36:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:36:45 localhost systemd[1]: tmp-crun.kZjPd7.mount: Deactivated successfully. Dec 2 03:36:45 localhost podman[87237]: 2025-12-02 08:36:45.425644108 +0000 UTC m=+0.067104591 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, maintainer=OpenStack 
TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:36:45 localhost podman[87237]: 2025-12-02 08:36:45.447948669 +0000 UTC m=+0.089409142 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=ovn_controller, release=1761123044) Dec 2 03:36:45 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:36:45 localhost podman[87236]: 2025-12-02 08:36:45.531483113 +0000 UTC m=+0.174601537 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:36:45 localhost podman[87236]: 2025-12-02 08:36:45.587006911 +0000 UTC m=+0.230125295 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:36:45 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:36:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:36:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:36:53 localhost systemd[1]: tmp-crun.7R4Tdc.mount: Deactivated successfully. 
Dec 2 03:36:53 localhost podman[87308]: 2025-12-02 08:36:53.447400726 +0000 UTC m=+0.091714510 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, container_name=collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com) Dec 2 03:36:53 localhost podman[87308]: 2025-12-02 08:36:53.481430733 +0000 UTC m=+0.125744517 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, container_name=collectd, url=https://www.redhat.com, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, release=1761123044) Dec 2 03:36:53 localhost podman[87309]: 2025-12-02 08:36:53.493290581 +0000 UTC m=+0.134024025 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, 
batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, distribution-scope=public) Dec 2 03:36:53 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:36:53 localhost podman[87309]: 2025-12-02 08:36:53.502446092 +0000 UTC m=+0.143179526 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:36:53 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:36:54 localhost systemd[1]: tmp-crun.GQBwYt.mount: Deactivated successfully. Dec 2 03:37:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:37:03 localhost systemd[84191]: Created slice User Background Tasks Slice. 
Dec 2 03:37:03 localhost podman[87368]: 2025-12-02 08:37:03.437749687 +0000 UTC m=+0.077893483 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:37:03 localhost systemd[84191]: Starting Cleanup of User's Temporary Files and Directories... Dec 2 03:37:03 localhost systemd[84191]: Finished Cleanup of User's Temporary Files and Directories. Dec 2 03:37:03 localhost podman[87368]: 2025-12-02 08:37:03.662032633 +0000 UTC m=+0.302176379 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 
qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:37:03 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:37:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:37:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 4776 writes, 21K keys, 4776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4776 writes, 569 syncs, 8.39 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 341 writes, 1463 keys, 341 commit groups, 1.0 writes per commit group, ingest: 1.82 MB, 0.00 MB/s#012Interval WAL: 341 writes, 122 syncs, 2.80 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:37:12 localhost podman[87474]: 2025-12-02 08:37:12.467119204 +0000 UTC m=+0.102565934 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, architecture=x86_64, container_name=logrotate_crond, io.openshift.expose-services=) Dec 2 03:37:12 localhost podman[87474]: 2025-12-02 08:37:12.476962792 +0000 UTC m=+0.112409552 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=logrotate_crond, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container) Dec 2 03:37:12 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:37:12 localhost podman[87485]: 2025-12-02 08:37:12.51503942 +0000 UTC m=+0.137518193 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:37:12 localhost systemd[1]: tmp-crun.ll5dEb.mount: Deactivated successfully. Dec 2 03:37:12 localhost podman[87485]: 2025-12-02 08:37:12.565248745 +0000 UTC m=+0.187727508 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, 
build-date=2025-11-19T00:12:45Z, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:37:12 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:37:12 localhost podman[87476]: 2025-12-02 08:37:12.5650647 +0000 UTC m=+0.195760700 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step5, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:37:12 localhost podman[87475]: 2025-12-02 08:37:12.622090065 +0000 UTC m=+0.255285268 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-11-19T00:36:58Z, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Dec 2 03:37:12 localhost podman[87477]: 2025-12-02 
08:37:12.681993314 +0000 UTC m=+0.306471197 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team) Dec 2 03:37:12 localhost podman[87476]: 2025-12-02 08:37:12.700325345 +0000 UTC m=+0.331021415 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 
03:37:12 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:37:12 localhost podman[87477]: 2025-12-02 08:37:12.737514722 +0000 UTC m=+0.361992595 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:37:12 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:37:12 localhost podman[87475]: 2025-12-02 08:37:12.994146673 +0000 UTC m=+0.627341916 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:37:13 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. 
Dec 2 03:37:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:37:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 5722 writes, 25K keys, 5722 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 5722 writes, 780 syncs, 7.34 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 546 writes, 2239 keys, 546 commit groups, 1.0 writes per commit group, ingest: 2.75 MB, 0.00 MB/s
Interval WAL: 546 writes, 172 syncs, 3.17 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 03:37:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:37:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:37:16 localhost systemd[1]: tmp-crun.lxwVjc.mount: Deactivated successfully. 
Dec 2 03:37:16 localhost podman[87596]: 2025-12-02 08:37:16.459947958 +0000 UTC m=+0.098937001 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true) Dec 2 03:37:16 localhost podman[87596]: 2025-12-02 08:37:16.503253959 +0000 UTC m=+0.142242982 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public) Dec 2 03:37:16 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:37:16 localhost systemd[1]: tmp-crun.WWD2yb.mount: Deactivated successfully. Dec 2 03:37:16 localhost podman[87597]: 2025-12-02 08:37:16.55651127 +0000 UTC m=+0.196807776 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.) Dec 2 03:37:16 localhost podman[87597]: 2025-12-02 08:37:16.578696948 +0000 UTC m=+0.218993454 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:37:16 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:37:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:37:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:37:24 localhost podman[87644]: 2025-12-02 08:37:24.450791989 +0000 UTC m=+0.087507703 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, container_name=collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z) Dec 2 03:37:24 localhost podman[87644]: 2025-12-02 08:37:24.489157385 +0000 UTC m=+0.125873109 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd) Dec 2 03:37:24 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:37:24 localhost podman[87645]: 2025-12-02 08:37:24.506632895 +0000 UTC m=+0.138960659 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, name=rhosp17/openstack-iscsid, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:37:24 localhost podman[87645]: 2025-12-02 08:37:24.518048553 +0000 UTC m=+0.150376357 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true) Dec 2 03:37:24 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:37:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:37:34 localhost systemd[1]: tmp-crun.aTFu4L.mount: Deactivated successfully. 
Dec 2 03:37:34 localhost podman[87684]: 2025-12-02 08:37:34.455146381 +0000 UTC m=+0.097478595 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 2 03:37:34 localhost podman[87684]: 2025-12-02 08:37:34.65727012 +0000 UTC m=+0.299602284 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:37:34 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:37:36 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:37:36 localhost recover_tripleo_nova_virtqemud[87714]: 62312 Dec 2 03:37:36 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:37:36 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. 
Dec 2 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:37:43 localhost podman[87716]: 2025-12-02 08:37:43.458794359 +0000 UTC m=+0.093515665 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.expose-services=) Dec 2 03:37:43 localhost podman[87727]: 2025-12-02 08:37:43.486363993 +0000 UTC m=+0.107202329 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible) Dec 2 03:37:43 localhost podman[87715]: 2025-12-02 08:37:43.509844854 +0000 UTC m=+0.146326715 container health_status 
0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 
17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:37:43 localhost podman[87715]: 2025-12-02 08:37:43.518114572 +0000 UTC m=+0.154596433 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, container_name=logrotate_crond, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc.) Dec 2 03:37:43 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:37:43 localhost podman[87727]: 2025-12-02 08:37:43.573454386 +0000 UTC m=+0.194292722 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, release=1761123044, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z) Dec 2 03:37:43 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:37:43 localhost podman[87717]: 2025-12-02 08:37:43.6526485 +0000 UTC m=+0.285344746 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, release=1761123044, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
batch=17.1_20251118.1) Dec 2 03:37:43 localhost podman[87717]: 2025-12-02 08:37:43.70388926 +0000 UTC m=+0.336585526 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:37:43 localhost podman[87718]: 2025-12-02 08:37:43.714746223 +0000 UTC m=+0.340481543 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
version=17.1.12) Dec 2 03:37:43 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:37:43 localhost podman[87718]: 2025-12-02 08:37:43.744514582 +0000 UTC m=+0.370249822 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Dec 2 03:37:43 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:37:43 localhost podman[87716]: 2025-12-02 08:37:43.816074825 +0000 UTC m=+0.450796121 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12) Dec 2 03:37:43 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:37:44 localhost systemd[1]: tmp-crun.WcFyCn.mount: Deactivated successfully. Dec 2 03:37:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:37:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:37:47 localhost systemd[1]: tmp-crun.6COnvA.mount: Deactivated successfully. 
Dec 2 03:37:47 localhost podman[87836]: 2025-12-02 08:37:47.447746697 +0000 UTC m=+0.086757376 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:37:47 localhost podman[87837]: 2025-12-02 08:37:47.499231382 +0000 UTC m=+0.136470096 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044) Dec 2 03:37:47 localhost podman[87836]: 2025-12-02 08:37:47.508014393 +0000 UTC m=+0.147025012 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 2 03:37:47 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:37:47 localhost podman[87837]: 2025-12-02 08:37:47.525224528 +0000 UTC m=+0.162463202 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:37:47 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:37:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:37:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:37:55 localhost systemd[1]: tmp-crun.wyXobx.mount: Deactivated successfully. 
Dec 2 03:37:55 localhost podman[87932]: 2025-12-02 08:37:55.453587373 +0000 UTC m=+0.099781193 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, version=17.1.12, vcs-type=git, com.redhat.component=openstack-collectd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=1761123044, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:37:55 localhost podman[87932]: 2025-12-02 08:37:55.491032155 +0000 UTC m=+0.137226005 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:37:55 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:37:55 localhost podman[87933]: 2025-12-02 08:37:55.540221204 +0000 UTC m=+0.179076189 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12) Dec 2 03:37:55 localhost podman[87933]: 2025-12-02 08:37:55.577075722 +0000 UTC m=+0.215930677 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:37:55 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:38:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:38:05 localhost systemd[1]: tmp-crun.7TSWqh.mount: Deactivated successfully. 
Dec 2 03:38:05 localhost podman[87970]: 2025-12-02 08:38:05.435786856 +0000 UTC m=+0.079630305 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=metrics_qdr, url=https://www.redhat.com, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO 
Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 03:38:05 localhost podman[87970]: 2025-12-02 08:38:05.600437862 +0000 UTC m=+0.244281261 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step1, container_name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible) Dec 2 03:38:05 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:38:14 localhost systemd[1]: tmp-crun.6dGbAt.mount: Deactivated successfully. Dec 2 03:38:14 localhost podman[88128]: 2025-12-02 08:38:14.462749084 +0000 UTC m=+0.093408412 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-nova-compute-container, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Dec 2 03:38:14 localhost podman[88129]: 2025-12-02 08:38:14.522093519 +0000 UTC m=+0.150173792 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, 
tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git) Dec 2 03:38:14 localhost podman[88132]: 2025-12-02 08:38:14.566882287 +0000 UTC m=+0.186917737 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=) Dec 2 03:38:14 localhost podman[88130]: 2025-12-02 08:38:14.609848448 +0000 UTC m=+0.234979517 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:38:14 localhost podman[88132]: 2025-12-02 08:38:14.621985194 +0000 UTC m=+0.242020634 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:38:14 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:38:14 localhost podman[88130]: 2025-12-02 08:38:14.647490616 +0000 UTC m=+0.272621705 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:38:14 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:38:14 localhost podman[88129]: 2025-12-02 08:38:14.686399095 +0000 UTC m=+0.314479408 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container) Dec 2 03:38:14 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:38:14 localhost podman[88127]: 2025-12-02 08:38:14.726091224 +0000 UTC m=+0.356827434 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, architecture=x86_64) Dec 2 03:38:14 localhost podman[88127]: 2025-12-02 08:38:14.734118077 +0000 UTC m=+0.364854247 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:38:14 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:38:14 localhost podman[88128]: 2025-12-02 08:38:14.808167422 +0000 UTC m=+0.438826780 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Dec 2 03:38:14 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:38:15 localhost systemd[1]: tmp-crun.8wy6R5.mount: Deactivated successfully. Dec 2 03:38:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:38:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:38:18 localhost systemd[1]: tmp-crun.L3XbbZ.mount: Deactivated successfully. 
Dec 2 03:38:18 localhost podman[88246]: 2025-12-02 08:38:18.448515532 +0000 UTC m=+0.091629048 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Dec 2 03:38:18 localhost podman[88247]: 2025-12-02 08:38:18.494741426 +0000 UTC m=+0.134732003 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 2 03:38:18 localhost podman[88247]: 2025-12-02 08:38:18.514296378 +0000 UTC m=+0.154286955 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vendor=Red Hat, Inc., container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, 
name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:38:18 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:38:18 localhost podman[88246]: 2025-12-02 08:38:18.565258041 +0000 UTC m=+0.208371497 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:38:18 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:38:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:38:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:38:26 localhost podman[88293]: 2025-12-02 08:38:26.42382113 +0000 UTC m=+0.068325310 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Dec 2 03:38:26 localhost podman[88293]: 2025-12-02 08:38:26.43133913 +0000 UTC m=+0.075843280 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-collectd) Dec 2 03:38:26 localhost systemd[1]: tmp-crun.522NAq.mount: Deactivated successfully. Dec 2 03:38:26 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:38:26 localhost podman[88294]: 2025-12-02 08:38:26.449278511 +0000 UTC m=+0.087907604 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, architecture=x86_64, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:38:26 localhost podman[88294]: 2025-12-02 08:38:26.483704588 +0000 UTC m=+0.122333671 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, architecture=x86_64, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:38:26 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:38:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:38:36 localhost systemd[1]: tmp-crun.SXZVSp.mount: Deactivated successfully. 
Dec 2 03:38:36 localhost podman[88333]: 2025-12-02 08:38:36.461870331 +0000 UTC m=+0.100985093 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd) Dec 2 03:38:36 localhost podman[88333]: 2025-12-02 08:38:36.698052418 +0000 UTC m=+0.337167180 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:38:36 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:38:45 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:38:45 localhost recover_tripleo_nova_virtqemud[88393]: 62312 Dec 2 03:38:45 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:38:45 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:38:45 localhost systemd[1]: tmp-crun.7GHLol.mount: Deactivated successfully. Dec 2 03:38:45 localhost podman[88364]: 2025-12-02 08:38:45.473194234 +0000 UTC m=+0.094857429 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Dec 2 03:38:45 localhost podman[88364]: 2025-12-02 08:38:45.493416503 +0000 UTC m=+0.115079708 container exec_died 
1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-compute) Dec 2 03:38:45 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:38:45 localhost systemd[1]: tmp-crun.muiped.mount: Deactivated successfully. 
Dec 2 03:38:45 localhost podman[88371]: 2025-12-02 08:38:45.588565059 +0000 UTC m=+0.201881514 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:38:45 localhost podman[88363]: 2025-12-02 08:38:45.63071136 +0000 UTC m=+0.249656447 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public) Dec 2 03:38:45 localhost podman[88362]: 2025-12-02 08:38:45.678457742 +0000 UTC m=+0.304935698 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 2 03:38:45 localhost podman[88362]: 2025-12-02 08:38:45.687851699 +0000 UTC m=+0.314329645 container exec_died 
0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=logrotate_crond, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:38:45 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:38:45 localhost podman[88365]: 2025-12-02 08:38:45.733331994 +0000 UTC m=+0.348479404 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true) Dec 2 03:38:45 localhost podman[88371]: 2025-12-02 08:38:45.760119959 +0000 UTC m=+0.373436414 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z) Dec 2 03:38:45 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated 
successfully. Dec 2 03:38:45 localhost podman[88365]: 2025-12-02 08:38:45.813355809 +0000 UTC m=+0.428503219 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, url=https://www.redhat.com) Dec 2 03:38:45 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:38:45 localhost podman[88363]: 2025-12-02 08:38:45.989959745 +0000 UTC m=+0.608904802 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Dec 2 03:38:45 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:38:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:38:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:38:49 localhost systemd[1]: tmp-crun.HlW69U.mount: Deactivated successfully. Dec 2 03:38:49 localhost podman[88479]: 2025-12-02 08:38:49.451876303 +0000 UTC m=+0.085293058 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, container_name=ovn_metadata_agent, release=1761123044, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 2 03:38:49 localhost podman[88479]: 2025-12-02 08:38:49.503420051 +0000 UTC m=+0.136836776 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible) Dec 2 03:38:49 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:38:49 localhost podman[88480]: 2025-12-02 08:38:49.509211447 +0000 UTC m=+0.142070138 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:38:49 localhost podman[88480]: 2025-12-02 08:38:49.592123714 +0000 UTC m=+0.224982325 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vcs-type=git, build-date=2025-11-18T23:34:05Z, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:38:49 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:38:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:38:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:38:57 localhost podman[88572]: 2025-12-02 08:38:57.458292226 +0000 UTC m=+0.098418689 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, config_id=tripleo_step3, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, container_name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20251118.1) Dec 2 03:38:57 localhost podman[88572]: 2025-12-02 08:38:57.467263283 +0000 UTC m=+0.107389726 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 2 03:38:57 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:38:57 localhost podman[88573]: 2025-12-02 08:38:57.540297701 +0000 UTC m=+0.177422668 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, tcib_managed=true) Dec 2 03:38:57 localhost podman[88573]: 2025-12-02 08:38:57.54822794 +0000 UTC m=+0.185352897 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, version=17.1.12, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:38:57 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:39:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:39:07 localhost podman[88611]: 2025-12-02 08:39:07.438717018 +0000 UTC m=+0.078208430 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z) Dec 2 03:39:07 localhost podman[88611]: 2025-12-02 08:39:07.634071016 +0000 UTC m=+0.273562468 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:39:07 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:39:16 localhost systemd[1]: tmp-crun.60I4Ae.mount: Deactivated successfully. Dec 2 03:39:16 localhost podman[88719]: 2025-12-02 08:39:16.458561586 +0000 UTC m=+0.094101950 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:39:16 localhost podman[88720]: 2025-12-02 08:39:16.519923261 +0000 UTC m=+0.150500780 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, release=1761123044, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:39:16 localhost podman[88717]: 2025-12-02 08:39:16.564435582 +0000 UTC m=+0.202694565 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat 
OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Dec 2 03:39:16 localhost podman[88718]: 2025-12-02 08:39:16.57352367 +0000 UTC m=+0.211885386 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, 
name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:39:16 localhost podman[88720]: 2025-12-02 08:39:16.579957022 +0000 UTC m=+0.210534471 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:39:16 localhost podman[88719]: 2025-12-02 08:39:16.587442241 +0000 UTC m=+0.222982635 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=nova_compute, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Dec 2 03:39:16 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:39:16 localhost podman[88717]: 2025-12-02 08:39:16.60092035 +0000 UTC m=+0.239179283 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:39:16 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:39:16 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:39:16 localhost podman[88728]: 2025-12-02 08:39:16.658763656 +0000 UTC m=+0.285947980 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:39:16 localhost podman[88728]: 2025-12-02 08:39:16.6879126 +0000 UTC m=+0.315096944 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:39:16 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:39:16 localhost podman[88718]: 2025-12-02 08:39:16.951947588 +0000 UTC m=+0.590309304 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=) Dec 2 03:39:16 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:39:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:39:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:39:20 localhost podman[88840]: 2025-12-02 08:39:20.436194368 +0000 UTC m=+0.070371822 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git) Dec 2 03:39:20 localhost podman[88840]: 2025-12-02 08:39:20.484205507 +0000 UTC m=+0.118382951 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:39:20 localhost podman[88839]: 2025-12-02 08:39:20.486603498 +0000 UTC m=+0.123774347 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, distribution-scope=public, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:39:20 localhost podman[88839]: 2025-12-02 08:39:20.529143528 +0000 UTC m=+0.166314357 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true) Dec 2 03:39:20 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:39:20 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:39:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:39:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:39:28 localhost podman[88886]: 2025-12-02 08:39:28.490444047 +0000 UTC m=+0.132234560 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:39:28 localhost podman[88886]: 2025-12-02 08:39:28.503348521 +0000 UTC m=+0.145139034 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, container_name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd) Dec 2 03:39:28 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:39:28 localhost podman[88887]: 2025-12-02 08:39:28.464151794 +0000 UTC m=+0.102145722 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4) Dec 2 03:39:28 localhost podman[88887]: 2025-12-02 08:39:28.545125623 +0000 UTC m=+0.183119511 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, container_name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044) Dec 2 03:39:28 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:39:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:39:38 localhost systemd[1]: tmp-crun.9R5Ec8.mount: Deactivated successfully. 
Dec 2 03:39:38 localhost podman[88926]: 2025-12-02 08:39:38.444712328 +0000 UTC m=+0.085120064 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, io.buildah.version=1.41.4, architecture=x86_64, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 2 03:39:38 localhost podman[88926]: 2025-12-02 08:39:38.637136463 +0000 UTC m=+0.277544249 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:39:38 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:39:47 localhost systemd[1]: tmp-crun.jVgdzt.mount: Deactivated successfully. Dec 2 03:39:47 localhost podman[88956]: 2025-12-02 08:39:47.526128883 +0000 UTC m=+0.157087476 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Dec 2 03:39:47 localhost podman[88964]: 2025-12-02 08:39:47.575630689 +0000 UTC m=+0.195724578 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 2 03:39:47 localhost podman[88957]: 2025-12-02 08:39:47.477353265 +0000 UTC m=+0.107532449 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc.) Dec 2 03:39:47 localhost podman[88964]: 2025-12-02 08:39:47.606317682 +0000 UTC m=+0.226411591 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:39:47 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:39:47 localhost podman[88958]: 2025-12-02 08:39:47.632504851 +0000 UTC m=+0.258304644 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, 
release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:39:47 localhost podman[88957]: 2025-12-02 08:39:47.664339312 +0000 UTC m=+0.294518476 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:39:47 
localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:39:47 localhost podman[88958]: 2025-12-02 08:39:47.708094864 +0000 UTC m=+0.333894737 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:11:48Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, config_id=tripleo_step4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Dec 2 03:39:47 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:39:47 localhost podman[88955]: 2025-12-02 08:39:47.709837769 +0000 UTC m=+0.343378917 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, vcs-type=git, container_name=logrotate_crond, name=rhosp17/openstack-cron, batch=17.1_20251118.1, release=1761123044) Dec 2 03:39:47 localhost podman[88955]: 2025-12-02 08:39:47.792047158 +0000 UTC m=+0.425588246 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
container_name=logrotate_crond) Dec 2 03:39:47 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:39:47 localhost podman[88956]: 2025-12-02 08:39:47.926229266 +0000 UTC m=+0.557187789 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:39:47 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:39:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:39:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:39:51 localhost systemd[1]: tmp-crun.S1y3to.mount: Deactivated successfully. 
Dec 2 03:39:51 localhost podman[89079]: 2025-12-02 08:39:51.459309625 +0000 UTC m=+0.098397528 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 03:39:51 localhost systemd[1]: tmp-crun.HXR0ia.mount: Deactivated successfully. 
Dec 2 03:39:51 localhost podman[89080]: 2025-12-02 08:39:51.517984673 +0000 UTC m=+0.144520030 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, container_name=ovn_controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:39:51 localhost podman[89079]: 2025-12-02 08:39:51.523075321 +0000 UTC m=+0.162163164 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:39:51 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. 
Dec 2 03:39:51 localhost podman[89080]: 2025-12-02 08:39:51.544024888 +0000 UTC m=+0.170560245 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, container_name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, release=1761123044, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:39:51 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:39:56 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:39:56 localhost recover_tripleo_nova_virtqemud[89149]: 62312 Dec 2 03:39:56 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:39:56 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:39:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:39:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:39:59 localhost systemd[1]: tmp-crun.YGs3Me.mount: Deactivated successfully. 
Dec 2 03:39:59 localhost podman[89151]: 2025-12-02 08:39:59.449326374 +0000 UTC m=+0.091086104 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, version=17.1.12, 
batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:39:59 localhost podman[89150]: 2025-12-02 08:39:59.491706481 +0000 UTC m=+0.135539123 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 03:39:59 localhost podman[89150]: 2025-12-02 08:39:59.500275766 +0000 UTC m=+0.144108438 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, 
io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, io.buildah.version=1.41.4, distribution-scope=public) Dec 2 03:39:59 localhost podman[89151]: 2025-12-02 08:39:59.51195831 +0000 UTC m=+0.153718090 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Dec 2 03:39:59 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:39:59 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:40:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:40:09 localhost podman[89188]: 2025-12-02 08:40:09.447279084 +0000 UTC m=+0.090349895 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, config_id=tripleo_step1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:40:09 localhost podman[89188]: 2025-12-02 08:40:09.68187012 +0000 UTC m=+0.324940932 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Dec 2 03:40:09 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 03:40:13 localhost podman[89321]: 2025-12-02 08:40:13.746909093 +0000 UTC m=+0.068445785 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.41.4, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, version=7, ceph=True, com.redhat.component=rhceph-container) Dec 2 03:40:13 localhost podman[89321]: 2025-12-02 08:40:13.825947272 +0000 UTC m=+0.147483934 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, architecture=x86_64, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., 
maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:40:18 localhost podman[89468]: 2025-12-02 08:40:18.465903299 +0000 UTC m=+0.097203427 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, container_name=logrotate_crond, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true) Dec 2 03:40:18 localhost podman[89468]: 2025-12-02 08:40:18.47585581 +0000 UTC m=+0.107155938 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:40:18 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:40:18 localhost podman[89469]: 2025-12-02 08:40:18.517347435 +0000 UTC m=+0.147200988 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, 
vcs-type=git, config_id=tripleo_step4, distribution-scope=public, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:40:18 localhost podman[89482]: 2025-12-02 08:40:18.564379959 +0000 UTC m=+0.183855450 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z) Dec 2 03:40:18 localhost podman[89470]: 2025-12-02 08:40:18.617911037 +0000 UTC m=+0.242749903 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=nova_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:40:18 localhost podman[89470]: 2025-12-02 08:40:18.63634312 +0000 UTC m=+0.261182006 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:40:18 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: 
Deactivated successfully. Dec 2 03:40:18 localhost podman[89482]: 2025-12-02 08:40:18.690975946 +0000 UTC m=+0.310451447 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, 
version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:12:45Z) Dec 2 03:40:18 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:40:18 localhost podman[89471]: 2025-12-02 08:40:18.784972272 +0000 UTC m=+0.407320755 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_compute) Dec 2 03:40:18 localhost podman[89471]: 2025-12-02 08:40:18.816951898 +0000 UTC m=+0.439300421 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, description=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_compute, vcs-type=git, config_id=tripleo_step4) Dec 2 03:40:18 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:40:18 localhost podman[89469]: 2025-12-02 08:40:18.883175374 +0000 UTC m=+0.513028957 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:40:18 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:40:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:40:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:40:22 localhost podman[89590]: 2025-12-02 08:40:22.441678095 +0000 UTC m=+0.078769594 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, version=17.1.12, container_name=ovn_controller, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:40:22 localhost podman[89590]: 2025-12-02 08:40:22.466410077 +0000 UTC m=+0.103501606 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 03:40:22 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:40:22 localhost systemd[1]: tmp-crun.hLa1rc.mount: Deactivated successfully. Dec 2 03:40:22 localhost podman[89589]: 2025-12-02 08:40:22.549463528 +0000 UTC m=+0.189569934 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:40:22 localhost podman[89589]: 2025-12-02 08:40:22.593663171 +0000 UTC m=+0.233769587 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64) Dec 2 03:40:22 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:40:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:40:30 localhost systemd[1]: tmp-crun.dlUBdX.mount: Deactivated successfully. 
Dec 2 03:40:30 localhost podman[89636]: 2025-12-02 08:40:30.468186033 +0000 UTC m=+0.102273986 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=iscsid, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:40:30 localhost systemd[1]: tmp-crun.u3fNCp.mount: Deactivated successfully. Dec 2 03:40:30 localhost podman[89635]: 2025-12-02 08:40:30.511744609 +0000 UTC m=+0.149980206 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) 
Dec 2 03:40:30 localhost podman[89635]: 2025-12-02 08:40:30.527025215 +0000 UTC m=+0.165260822 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 03:40:30 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:40:30 localhost podman[89636]: 2025-12-02 08:40:30.55307692 +0000 UTC m=+0.187164873 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3) Dec 2 03:40:30 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:40:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:40:40 localhost systemd[1]: tmp-crun.QQXt3W.mount: Deactivated successfully. 
Dec 2 03:40:40 localhost podman[89675]: 2025-12-02 08:40:40.419754967 +0000 UTC m=+0.069014908 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:40:40 localhost podman[89675]: 2025-12-02 08:40:40.622391599 +0000 UTC m=+0.271651480 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:46Z, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://www.redhat.com, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:40:40 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:40:49 localhost podman[89705]: 2025-12-02 08:40:49.446946169 +0000 UTC m=+0.083454112 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute) Dec 2 03:40:49 localhost podman[89707]: 2025-12-02 08:40:49.466198014 +0000 UTC m=+0.093243290 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12) Dec 2 03:40:49 localhost podman[89705]: 2025-12-02 08:40:49.478943994 +0000 UTC m=+0.115451967 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, name=rhosp17/openstack-nova-compute) Dec 2 03:40:49 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:40:49 localhost podman[89704]: 2025-12-02 08:40:49.545344166 +0000 UTC m=+0.180489996 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-type=git, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:40:49 localhost podman[89722]: 2025-12-02 08:40:49.574255494 +0000 UTC m=+0.192221631 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 2 03:40:49 localhost podman[89703]: 2025-12-02 08:40:49.523725371 +0000 UTC m=+0.161197208 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:40:49 localhost podman[89707]: 2025-12-02 08:40:49.600397331 +0000 UTC m=+0.227442627 container exec_died 
4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:40:49 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:40:49 localhost podman[89703]: 2025-12-02 08:40:49.656903095 +0000 UTC m=+0.294374842 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, container_name=logrotate_crond) Dec 2 03:40:49 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:40:49 localhost podman[89722]: 2025-12-02 08:40:49.677715078 +0000 UTC m=+0.295681175 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:40:49 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:40:49 localhost podman[89704]: 2025-12-02 08:40:49.903968495 +0000 UTC m=+0.539114335 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, container_name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:40:49 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:40:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:40:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:40:53 localhost podman[89823]: 2025-12-02 08:40:53.432633572 +0000 UTC m=+0.079059221 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:40:53 localhost systemd[1]: tmp-crun.xeDSHm.mount: Deactivated successfully. 
Dec 2 03:40:53 localhost podman[89824]: 2025-12-02 08:40:53.489782311 +0000 UTC m=+0.131330307 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 03:40:53 localhost podman[89823]: 2025-12-02 08:40:53.513222071 +0000 UTC m=+0.159647700 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1) Dec 2 03:40:53 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. 
Dec 2 03:40:53 localhost podman[89824]: 2025-12-02 08:40:53.567058946 +0000 UTC m=+0.208607002 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, release=1761123044, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:40:53 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:41:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:41:01 localhost podman[89896]: 2025-12-02 08:41:01.428125231 +0000 UTC m=+0.069042280 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, container_name=iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:41:01 localhost podman[89896]: 2025-12-02 08:41:01.437065666 +0000 UTC m=+0.077982655 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z) Dec 2 03:41:01 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:41:01 localhost systemd[1]: tmp-crun.ilDDyu.mount: Deactivated successfully. 
Dec 2 03:41:01 localhost podman[89895]: 2025-12-02 08:41:01.479867283 +0000 UTC m=+0.120388302 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, vcs-type=git, config_id=tripleo_step3) Dec 2 03:41:01 localhost podman[89895]: 2025-12-02 08:41:01.515994723 +0000 UTC m=+0.156515682 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, build-date=2025-11-18T22:51:28Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Dec 2 03:41:01 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:41:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:41:11 localhost systemd[1]: tmp-crun.LyOxQA.mount: Deactivated successfully. Dec 2 03:41:11 localhost podman[89935]: 2025-12-02 08:41:11.42908054 +0000 UTC m=+0.076267162 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, version=17.1.12) Dec 2 03:41:11 localhost podman[89935]: 2025-12-02 08:41:11.64079673 +0000 UTC m=+0.287983292 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr) Dec 2 03:41:11 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:41:20 localhost podman[90042]: 2025-12-02 08:41:20.434833701 +0000 UTC m=+0.069206683 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:41:20 localhost podman[90042]: 2025-12-02 08:41:20.466406337 +0000 UTC m=+0.100779369 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step5, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) Dec 2 03:41:20 localhost systemd[1]: tmp-crun.3IlJQ9.mount: Deactivated successfully. Dec 2 03:41:20 localhost podman[90044]: 2025-12-02 08:41:20.508896996 +0000 UTC m=+0.136178550 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, batch=17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12) Dec 2 03:41:20 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:41:20 localhost podman[90044]: 2025-12-02 08:41:20.563970212 +0000 UTC m=+0.191251706 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:41:20 localhost podman[90043]: 2025-12-02 08:41:20.519890933 +0000 UTC m=+0.148489139 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:41:20 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:41:20 localhost podman[90043]: 2025-12-02 08:41:20.603287902 +0000 UTC m=+0.231886098 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, 
io.openshift.expose-services=, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:41:20 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:41:20 localhost podman[90041]: 2025-12-02 08:41:20.654787359 +0000 UTC m=+0.288038753 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:41:20 localhost podman[90040]: 2025-12-02 08:41:20.710701817 +0000 UTC m=+0.342332590 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, 
managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12) Dec 2 03:41:20 localhost podman[90040]: 2025-12-02 08:41:20.748078137 +0000 UTC m=+0.379708910 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, 
io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:41:20 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:41:21 localhost podman[90041]: 2025-12-02 08:41:21.033092033 +0000 UTC m=+0.666343487 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:41:21 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:41:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:41:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:41:24 localhost systemd[1]: tmp-crun.s2HKr9.mount: Deactivated successfully. 
Dec 2 03:41:24 localhost podman[90163]: 2025-12-02 08:41:24.437869911 +0000 UTC m=+0.079139583 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T23:34:05Z) Dec 2 03:41:24 localhost podman[90163]: 2025-12-02 08:41:24.489150963 +0000 UTC m=+0.130420645 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:41:24 localhost systemd[1]: tmp-crun.yXvBOg.mount: Deactivated successfully. Dec 2 03:41:24 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:41:24 localhost podman[90162]: 2025-12-02 08:41:24.49066313 +0000 UTC m=+0.131604004 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:41:24 localhost podman[90162]: 2025-12-02 08:41:24.571064775 +0000 UTC m=+0.212005659 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:41:24 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:41:26 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:41:26 localhost recover_tripleo_nova_virtqemud[90209]: 62312 Dec 2 03:41:26 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:41:26 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:41:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:41:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:41:32 localhost podman[90211]: 2025-12-02 08:41:32.442977281 +0000 UTC m=+0.082241952 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, 
vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.12, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=) Dec 2 03:41:32 localhost podman[90210]: 2025-12-02 08:41:32.496687153 +0000 UTC m=+0.137630486 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-type=git, config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, version=17.1.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:41:32 localhost podman[90210]: 2025-12-02 08:41:32.508137071 +0000 UTC m=+0.149080464 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, vcs-type=git, url=https://www.redhat.com, container_name=collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:41:32 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:41:32 localhost podman[90211]: 2025-12-02 08:41:32.531168671 +0000 UTC m=+0.170433332 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid) Dec 2 03:41:32 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:41:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:41:42 localhost podman[90249]: 2025-12-02 08:41:42.441463508 +0000 UTC m=+0.083389691 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible) Dec 2 03:41:42 localhost podman[90249]: 2025-12-02 08:41:42.63816511 +0000 UTC m=+0.280091253 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1) Dec 2 03:41:42 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:41:51 localhost podman[90278]: 2025-12-02 08:41:51.465599201 +0000 UTC m=+0.098081640 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, container_name=logrotate_crond, name=rhosp17/openstack-cron, vendor=Red Hat, Inc.) Dec 2 03:41:51 localhost systemd[1]: tmp-crun.ux37CB.mount: Deactivated successfully. Dec 2 03:41:51 localhost podman[90278]: 2025-12-02 08:41:51.503034654 +0000 UTC m=+0.135517073 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:41:51 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:41:51 localhost podman[90279]: 2025-12-02 08:41:51.522561066 +0000 UTC m=+0.151046664 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z) Dec 2 03:41:51 localhost podman[90284]: 2025-12-02 08:41:51.57480022 +0000 UTC m=+0.194835125 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:41:51 localhost podman[90284]: 2025-12-02 08:41:51.61092346 +0000 UTC m=+0.230958445 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com) Dec 2 03:41:51 localhost podman[90280]: 2025-12-02 08:41:51.621290611 +0000 UTC m=+0.245921742 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:41:51 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:41:51 localhost podman[90291]: 2025-12-02 08:41:51.49416206 +0000 UTC m=+0.110676067 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12) Dec 2 03:41:51 localhost podman[90280]: 2025-12-02 08:41:51.659018191 +0000 UTC m=+0.283649332 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, container_name=nova_compute, name=rhosp17/openstack-nova-compute) Dec 2 03:41:51 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated 
successfully. Dec 2 03:41:51 localhost podman[90291]: 2025-12-02 08:41:51.731500636 +0000 UTC m=+0.348014633 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true) Dec 2 03:41:51 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:41:51 localhost podman[90279]: 2025-12-02 08:41:51.894987992 +0000 UTC m=+0.523473530 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=nova_migration_target, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public) Dec 2 03:41:51 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:41:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:41:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:41:55 localhost systemd[1]: tmp-crun.MUxLT0.mount: Deactivated successfully. 
Dec 2 03:41:55 localhost podman[90397]: 2025-12-02 08:41:55.436818833 +0000 UTC m=+0.079435441 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:41:55 localhost podman[90398]: 2025-12-02 08:41:55.414694576 +0000 UTC m=+0.056944964 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ovn_controller, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z) Dec 2 03:41:55 localhost podman[90397]: 2025-12-02 08:41:55.479175149 +0000 UTC m=+0.121791807 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, 
config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, tcib_managed=true, distribution-scope=public, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc.) Dec 2 03:41:55 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:41:55 localhost podman[90398]: 2025-12-02 08:41:55.499434719 +0000 UTC m=+0.141685097 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:41:55 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:42:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:42:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:42:03 localhost podman[90468]: 2025-12-02 08:42:03.452108129 +0000 UTC m=+0.091406084 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-collectd-container) Dec 2 03:42:03 localhost podman[90469]: 2025-12-02 08:42:03.500410506 +0000 UTC m=+0.136298338 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, version=17.1.12, architecture=x86_64, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Dec 2 03:42:03 localhost podman[90468]: 2025-12-02 08:42:03.518161687 +0000 UTC m=+0.157459582 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Dec 2 03:42:03 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:42:03 localhost podman[90469]: 2025-12-02 08:42:03.540022317 +0000 UTC m=+0.175910099 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, container_name=iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid) Dec 2 03:42:03 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:42:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:42:13 localhost podman[90507]: 2025-12-02 08:42:13.432884483 +0000 UTC m=+0.078289729 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, 
com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:42:13 localhost podman[90507]: 2025-12-02 08:42:13.644555319 +0000 UTC m=+0.289960565 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:42:13 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:42:22 localhost systemd[1]: tmp-crun.m7Hn4B.mount: Deactivated successfully. Dec 2 03:42:22 localhost podman[90614]: 2025-12-02 08:42:22.507139634 +0000 UTC m=+0.141057927 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:36:58Z, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:42:22 localhost podman[90616]: 2025-12-02 08:42:22.566687074 +0000 UTC m=+0.195317514 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:42:22 localhost podman[90617]: 2025-12-02 08:42:22.616258356 +0000 UTC m=+0.240845677 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, 
container_name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:42:22 localhost podman[90616]: 2025-12-02 08:42:22.622984507 +0000 UTC m=+0.251614947 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:42:22 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:42:22 localhost podman[90617]: 2025-12-02 08:42:22.658961621 +0000 UTC m=+0.283548932 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:12:45Z, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:42:22 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:42:22 localhost podman[90615]: 2025-12-02 08:42:22.671436459 +0000 UTC m=+0.304310574 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:42:22 localhost podman[90613]: 2025-12-02 08:42:22.483672939 +0000 UTC m=+0.118074545 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:42:22 localhost podman[90615]: 2025-12-02 08:42:22.700927656 +0000 UTC m=+0.333801751 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, container_name=nova_compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, url=https://www.redhat.com, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:42:22 localhost systemd[1]: 
1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:42:22 localhost podman[90613]: 2025-12-02 08:42:22.719050977 +0000 UTC m=+0.353452573 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, 
io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:42:22 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:42:22 localhost podman[90614]: 2025-12-02 08:42:22.864026328 +0000 UTC m=+0.497944631 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Dec 2 03:42:22 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:42:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:42:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:42:26 localhost podman[90736]: 2025-12-02 08:42:26.450709096 +0000 UTC m=+0.089672786 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, 
konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:42:26 localhost podman[90736]: 2025-12-02 08:42:26.499514016 +0000 UTC m=+0.138477706 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 2 03:42:26 localhost podman[90737]: 2025-12-02 08:42:26.502333253 +0000 UTC m=+0.137759108 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1) Dec 2 03:42:26 localhost podman[90737]: 2025-12-02 08:42:26.526041104 +0000 UTC m=+0.161466989 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Dec 2 03:42:26 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:42:26 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:42:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:42:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:42:34 localhost podman[90785]: 2025-12-02 08:42:34.45572224 +0000 UTC m=+0.091213778 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:42:34 localhost podman[90785]: 2025-12-02 08:42:34.46386198 +0000 UTC m=+0.099353538 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z) Dec 2 03:42:34 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:42:34 localhost podman[90786]: 2025-12-02 08:42:34.503883373 +0000 UTC m=+0.135781954 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public) Dec 2 03:42:34 localhost podman[90786]: 2025-12-02 08:42:34.517266895 +0000 UTC m=+0.149165526 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.12) Dec 2 03:42:34 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:42:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:42:44 localhost systemd[1]: tmp-crun.gnxLQH.mount: Deactivated successfully. 
Dec 2 03:42:44 localhost podman[90824]: 2025-12-02 08:42:44.442171806 +0000 UTC m=+0.086912133 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, config_id=tripleo_step1, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:42:44 localhost podman[90824]: 2025-12-02 08:42:44.612633137 +0000 UTC m=+0.257373374 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:42:44 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:42:53 localhost systemd[1]: tmp-crun.N8SSM3.mount: Deactivated successfully. Dec 2 03:42:53 localhost podman[90855]: 2025-12-02 08:42:53.451130518 +0000 UTC m=+0.084284531 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container) Dec 2 03:42:53 localhost podman[90854]: 2025-12-02 08:42:53.506285381 +0000 UTC m=+0.138862448 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-type=git, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:42:53 localhost podman[90859]: 2025-12-02 08:42:53.564213767 +0000 UTC m=+0.191307036 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Dec 2 03:42:53 localhost podman[90859]: 2025-12-02 08:42:53.587642671 +0000 UTC m=+0.214735840 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:42:53 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:42:53 localhost podman[90853]: 2025-12-02 08:42:53.605930696 +0000 UTC m=+0.242441879 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:42:53 localhost podman[90853]: 2025-12-02 08:42:53.614846557 +0000 UTC m=+0.251357740 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 03:42:53 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:42:53 localhost podman[90855]: 2025-12-02 08:42:53.630980753 +0000 UTC m=+0.264134756 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team) Dec 2 03:42:53 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:42:53 localhost podman[90866]: 2025-12-02 08:42:53.479605158 +0000 UTC m=+0.103130621 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:12:45Z) Dec 2 03:42:53 localhost podman[90866]: 2025-12-02 08:42:53.713094125 +0000 UTC m=+0.336619588 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) Dec 2 03:42:53 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:42:53 localhost podman[90854]: 2025-12-02 08:42:53.898182062 +0000 UTC m=+0.530759229 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc.) Dec 2 03:42:53 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:42:54 localhost systemd[1]: tmp-crun.9gVOb6.mount: Deactivated successfully. Dec 2 03:42:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:42:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:42:57 localhost podman[90995]: 2025-12-02 08:42:57.451818876 +0000 UTC m=+0.087034825 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com) Dec 2 03:42:57 localhost systemd[1]: tmp-crun.dl2QoQ.mount: Deactivated successfully. Dec 2 03:42:57 localhost podman[90995]: 2025-12-02 08:42:57.499332791 +0000 UTC m=+0.134548690 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 
17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12) Dec 2 03:42:57 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:42:57 localhost podman[90994]: 2025-12-02 08:42:57.503595407 +0000 UTC m=+0.141431197 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:42:57 localhost podman[90994]: 2025-12-02 08:42:57.589170612 +0000 UTC m=+0.227006382 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, 
managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:42:57 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:43:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:43:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:43:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:43:05 localhost recover_tripleo_nova_virtqemud[91055]: 62312 Dec 2 03:43:05 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:43:05 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:43:05 localhost podman[91043]: 2025-12-02 08:43:05.443773748 +0000 UTC m=+0.085623638 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 2 03:43:05 localhost podman[91043]: 2025-12-02 08:43:05.483123352 +0000 UTC m=+0.124973192 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc.)
Dec 2 03:43:05 localhost podman[91042]: 2025-12-02 08:43:05.49042622 +0000 UTC m=+0.130728777 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, release=1761123044, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 2 03:43:05 localhost podman[91042]: 2025-12-02 08:43:05.499991349 +0000 UTC m=+0.140293896 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-collectd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3)
Dec 2 03:43:05 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully.
Dec 2 03:43:05 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully.
Dec 2 03:43:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.
Dec 2 03:43:15 localhost systemd[1]: tmp-crun.XQHn80.mount: Deactivated successfully.
Dec 2 03:43:15 localhost podman[91083]: 2025-12-02 08:43:15.476520157 +0000 UTC m=+0.120633125 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr)
Dec 2 03:43:15 localhost podman[91083]: 2025-12-02 08:43:15.700548577 +0000 UTC m=+0.344661545 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, tcib_managed=true, build-date=2025-11-18T22:49:46Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., config_id=tripleo_step1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 2 03:43:15 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully.
Dec 2 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.
Dec 2 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.
Dec 2 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.
Dec 2 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.
Dec 2 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.
Dec 2 03:43:24 localhost systemd[1]: tmp-crun.gVxmFR.mount: Deactivated successfully.
Dec 2 03:43:24 localhost systemd[1]: tmp-crun.ZjE8mv.mount: Deactivated successfully.
Dec 2 03:43:24 localhost podman[91203]: 2025-12-02 08:43:24.491316608 +0000 UTC m=+0.109810181 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 2 03:43:24 localhost podman[91190]: 2025-12-02 08:43:24.508576225 +0000 UTC m=+0.141841477 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 2 03:43:24 localhost podman[91203]: 2025-12-02 08:43:24.520979631 +0000 UTC m=+0.139473214 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 2 03:43:24 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully.
Dec 2 03:43:24 localhost podman[91197]: 2025-12-02 08:43:24.570683236 +0000 UTC m=+0.194970736 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 2 03:43:24 localhost podman[91191]: 2025-12-02 08:43:24.474703449 +0000 UTC m=+0.103438219 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 2 03:43:24 localhost podman[91191]: 2025-12-02 08:43:24.606056153 +0000 UTC m=+0.234790883 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 2 03:43:24 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully.
Dec 2 03:43:24 localhost podman[91197]: 2025-12-02 08:43:24.629083075 +0000 UTC m=+0.253370585 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red
Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:11:48Z, tcib_managed=true, release=1761123044, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public) Dec 2 03:43:24 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:43:24 localhost podman[91189]: 2025-12-02 08:43:24.680366453 +0000 UTC m=+0.315252159 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, release=1761123044, container_name=logrotate_crond, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:43:24 localhost podman[91189]: 2025-12-02 08:43:24.692954363 +0000 UTC m=+0.327840099 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:43:24 
localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:43:24 localhost podman[91190]: 2025-12-02 08:43:24.870460546 +0000 UTC m=+0.503725848 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, container_name=nova_migration_target) Dec 2 03:43:24 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:43:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:43:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:43:28 localhost podman[91307]: 2025-12-02 08:43:28.447331418 +0000 UTC m=+0.089268955 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4) Dec 2 03:43:28 localhost podman[91307]: 2025-12-02 08:43:28.486008705 +0000 UTC m=+0.127946282 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:43:28 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:43:28 localhost podman[91308]: 2025-12-02 08:43:28.507061855 +0000 UTC m=+0.143657587 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, 
url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:43:28 localhost podman[91308]: 2025-12-02 08:43:28.560088339 +0000 UTC m=+0.196684051 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:43:28 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:43:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:43:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:43:36 localhost systemd[1]: tmp-crun.TyTrZW.mount: Deactivated successfully. 
Dec 2 03:43:36 localhost podman[91354]: 2025-12-02 08:43:36.444410648 +0000 UTC m=+0.085109273 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:43:36 localhost podman[91354]: 2025-12-02 08:43:36.455243571 +0000 UTC m=+0.095942216 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:43:36 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:43:36 localhost systemd[1]: tmp-crun.nT9o1B.mount: Deactivated successfully. 
Dec 2 03:43:36 localhost podman[91353]: 2025-12-02 08:43:36.563396097 +0000 UTC m=+0.203155097 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., 
container_name=collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4) Dec 2 03:43:36 localhost podman[91353]: 2025-12-02 08:43:36.602054463 +0000 UTC m=+0.241813433 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 2 03:43:36 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:43:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:43:46 localhost podman[91393]: 2025-12-02 08:43:46.44435111 +0000 UTC m=+0.087893420 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, 
build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:43:46 localhost podman[91393]: 2025-12-02 08:43:46.640881196 +0000 UTC m=+0.284423486 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Dec 2 03:43:46 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:43:55 localhost systemd[1]: tmp-crun.P3D6Ew.mount: Deactivated successfully. Dec 2 03:43:55 localhost systemd[1]: tmp-crun.2L0PG6.mount: Deactivated successfully. Dec 2 03:43:55 localhost podman[91426]: 2025-12-02 08:43:55.472715948 +0000 UTC m=+0.100691335 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:43:55 localhost podman[91426]: 2025-12-02 08:43:55.501951949 +0000 UTC m=+0.129927386 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:43:55 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:43:55 localhost podman[91425]: 2025-12-02 08:43:55.51825989 +0000 UTC m=+0.149246888 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 2 03:43:55 localhost podman[91425]: 2025-12-02 08:43:55.548387935 +0000 UTC m=+0.179374883 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, 
description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:43:55 localhost podman[91424]: 2025-12-02 08:43:55.447778803 +0000 UTC m=+0.086332796 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com) Dec 2 03:43:55 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:43:55 localhost podman[91423]: 2025-12-02 08:43:55.568223932 +0000 UTC m=+0.205829870 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, tcib_managed=true, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:43:55 localhost podman[91438]: 2025-12-02 08:43:55.622474039 +0000 UTC m=+0.243145349 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:43:55 localhost podman[91438]: 2025-12-02 08:43:55.681009513 +0000 UTC m=+0.301680783 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:43:55 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:43:55 localhost podman[91423]: 2025-12-02 08:43:55.704082557 +0000 UTC m=+0.341688445 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 03:43:55 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:43:55 localhost podman[91424]: 2025-12-02 08:43:55.832058679 +0000 UTC m=+0.470612742 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target) Dec 2 03:43:55 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:43:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:43:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:43:59 localhost podman[91539]: 2025-12-02 08:43:59.44326629 +0000 UTC m=+0.085966977 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:43:59 localhost podman[91540]: 2025-12-02 08:43:59.505435452 +0000 UTC m=+0.141976682 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:43:59 localhost podman[91539]: 2025-12-02 08:43:59.516676976 +0000 UTC m=+0.159377643 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., 
build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 03:43:59 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:43:59 localhost podman[91540]: 2025-12-02 08:43:59.55967043 +0000 UTC m=+0.196211600 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12) Dec 2 03:43:59 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:44:06 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:44:06 localhost recover_tripleo_nova_virtqemud[91588]: 62312 Dec 2 03:44:06 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:44:06 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:44:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:44:07 localhost systemd[1]: tmp-crun.wnDMCr.mount: Deactivated successfully. 
Dec 2 03:44:07 localhost podman[91590]: 2025-12-02 08:44:07.455634103 +0000 UTC m=+0.090131949 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20251118.1, container_name=iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Dec 2 03:44:07 localhost systemd[1]: tmp-crun.PA4WNY.mount: Deactivated successfully. Dec 2 03:44:07 localhost podman[91589]: 2025-12-02 08:44:07.500688132 +0000 UTC m=+0.137914602 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container) Dec 2 03:44:07 localhost podman[91589]: 2025-12-02 08:44:07.532869213 +0000 UTC m=+0.170095683 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-collectd, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:44:07 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:44:07 localhost podman[91590]: 2025-12-02 08:44:07.552932476 +0000 UTC m=+0.187430282 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, container_name=iscsid) Dec 2 03:44:07 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:44:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:44:17 localhost podman[91629]: 2025-12-02 08:44:17.4650008 +0000 UTC m=+0.089630445 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 
17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:44:17 localhost podman[91629]: 2025-12-02 08:44:17.673947633 +0000 UTC m=+0.298577298 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, container_name=metrics_qdr, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:44:17 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:44:26 localhost podman[91738]: 2025-12-02 08:44:26.457429486 +0000 UTC m=+0.084011794 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:44:26 localhost podman[91738]: 2025-12-02 08:44:26.509960027 +0000 UTC m=+0.136542335 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4) Dec 2 03:44:26 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:44:26 localhost podman[91737]: 2025-12-02 08:44:26.510789339 +0000 UTC m=+0.136336459 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:44:26 localhost podman[91736]: 2025-12-02 08:44:26.56368356 +0000 UTC m=+0.192434977 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack 
Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:44:26 localhost podman[91735]: 2025-12-02 08:44:26.607476105 
+0000 UTC m=+0.235942154 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:44:26 localhost podman[91735]: 2025-12-02 08:44:26.617927997 +0000 UTC m=+0.246394046 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, 
version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:44:26 localhost podman[91739]: 2025-12-02 08:44:26.663374778 +0000 UTC m=+0.283662686 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1761123044, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4) Dec 2 03:44:26 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:44:26 localhost podman[91739]: 2025-12-02 08:44:26.688015093 +0000 UTC m=+0.308303041 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 2 03:44:26 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:44:26 localhost podman[91737]: 2025-12-02 08:44:26.699297849 +0000 UTC m=+0.324844989 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, 
name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 2 03:44:26 localhost systemd[1]: 
1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:44:26 localhost podman[91736]: 2025-12-02 08:44:26.947085352 +0000 UTC m=+0.575836739 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:44:26 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:44:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:44:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:44:30 localhost podman[91850]: 2025-12-02 08:44:30.445434101 +0000 UTC m=+0.083533141 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public) Dec 2 03:44:30 localhost podman[91850]: 2025-12-02 08:44:30.494115018 +0000 UTC m=+0.132214028 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Dec 2 03:44:30 localhost podman[91851]: 2025-12-02 08:44:30.507476369 +0000 UTC m=+0.141966352 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4) Dec 2 03:44:30 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:44:30 localhost podman[91851]: 2025-12-02 08:44:30.536056522 +0000 UTC m=+0.170546515 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4) Dec 2 03:44:30 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:44:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:44:38 localhost systemd[1]: tmp-crun.X0aibn.mount: Deactivated successfully. 
Dec 2 03:44:38 localhost podman[91898]: 2025-12-02 08:44:38.443906159 +0000 UTC m=+0.086230514 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
io.openshift.expose-services=, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd) Dec 2 03:44:38 localhost podman[91898]: 2025-12-02 08:44:38.449776488 +0000 UTC m=+0.092100853 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=collectd, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:44:38 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:44:38 localhost podman[91899]: 2025-12-02 08:44:38.534211812 +0000 UTC m=+0.176794424 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 2 03:44:38 localhost podman[91899]: 2025-12-02 08:44:38.546951417 +0000 UTC m=+0.189534059 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container) Dec 2 03:44:38 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:44:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:44:48 localhost systemd[1]: tmp-crun.wifcyg.mount: Deactivated successfully. 
Dec 2 03:44:48 localhost podman[91936]: 2025-12-02 08:44:48.444122128 +0000 UTC m=+0.087304817 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:46Z, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 03:44:48 localhost podman[91936]: 2025-12-02 08:44:48.621765329 +0000 UTC m=+0.264947948 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:44:48 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:44:57 localhost systemd[1]: tmp-crun.cqvwyB.mount: Deactivated successfully. Dec 2 03:44:57 localhost podman[91973]: 2025-12-02 08:44:57.475063103 +0000 UTC m=+0.099922074 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.12) Dec 2 03:44:57 localhost podman[91967]: 2025-12-02 08:44:57.445830549 +0000 UTC m=+0.079032694 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, version=17.1.12, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:44:57 localhost podman[91973]: 2025-12-02 08:44:57.505054309 +0000 UTC m=+0.129913270 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Dec 2 03:44:57 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:44:57 localhost podman[91967]: 2025-12-02 08:44:57.527762249 +0000 UTC m=+0.160964424 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, config_id=tripleo_step5) Dec 2 03:44:57 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:44:57 localhost podman[91978]: 2025-12-02 08:44:57.569840579 +0000 UTC m=+0.192781919 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:44:57 localhost podman[91978]: 2025-12-02 08:44:57.596972668 +0000 UTC m=+0.219913928 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:44:57 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:44:57 localhost podman[91965]: 2025-12-02 08:44:57.61679273 +0000 UTC m=+0.256658814 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-cron, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, 
distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 03:44:57 localhost podman[91966]: 2025-12-02 08:44:57.571029442 +0000 UTC m=+0.205199544 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, architecture=x86_64, container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container) Dec 2 03:44:57 localhost podman[91965]: 2025-12-02 08:44:57.629017398 +0000 UTC m=+0.268883502 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container) Dec 2 03:44:57 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:44:57 localhost podman[91966]: 2025-12-02 08:44:57.935151712 +0000 UTC m=+0.569321864 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:44:58 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:45:01 localhost systemd[1]: tmp-crun.1weSe6.mount: Deactivated successfully. 
Dec 2 03:45:01 localhost podman[92086]: 2025-12-02 08:45:01.445534028 +0000 UTC m=+0.082763095 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, architecture=x86_64, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.expose-services=) Dec 2 03:45:01 localhost podman[92085]: 2025-12-02 08:45:01.499746924 +0000 UTC m=+0.139403736 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:45:01 localhost podman[92086]: 2025-12-02 08:45:01.550829445 +0000 UTC m=+0.188058532 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, container_name=ovn_controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 2 03:45:01 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. 
Dec 2 03:45:01 localhost podman[92085]: 2025-12-02 08:45:01.569929149 +0000 UTC m=+0.209585971 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:45:01 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:45:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:45:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:45:09 localhost podman[92135]: 2025-12-02 08:45:09.428916622 +0000 UTC m=+0.070165655 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, container_name=iscsid, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:45:09 localhost podman[92135]: 2025-12-02 08:45:09.467337524 +0000 UTC m=+0.108586497 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:45:09 localhost systemd[1]: tmp-crun.16VdUl.mount: Deactivated successfully. Dec 2 03:45:09 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:45:09 localhost podman[92134]: 2025-12-02 08:45:09.492679465 +0000 UTC m=+0.136654552 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:45:09 localhost podman[92134]: 2025-12-02 08:45:09.501275996 +0000 UTC m=+0.145251123 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container) Dec 2 03:45:09 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:45:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:45:19 localhost systemd[1]: tmp-crun.rALoFN.mount: Deactivated successfully. Dec 2 03:45:19 localhost podman[92174]: 2025-12-02 08:45:19.456491651 +0000 UTC m=+0.098197989 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true) Dec 2 03:45:19 localhost podman[92174]: 2025-12-02 08:45:19.658381723 +0000 UTC m=+0.300088081 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc.) Dec 2 03:45:19 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:45:26 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:45:26 localhost recover_tripleo_nova_virtqemud[92265]: 62312 Dec 2 03:45:26 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:45:26 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. 
Dec 2 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:45:28 localhost podman[92284]: 2025-12-02 08:45:28.458279764 +0000 UTC m=+0.081743787 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, container_name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:45:28 localhost systemd[1]: tmp-crun.wt97bW.mount: Deactivated successfully. 
Dec 2 03:45:28 localhost podman[92283]: 2025-12-02 08:45:28.525997544 +0000 UTC m=+0.151478151 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, name=rhosp17/openstack-nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:45:28 localhost podman[92284]: 2025-12-02 08:45:28.537008638 +0000 UTC m=+0.160472631 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, 
distribution-scope=public) Dec 2 03:45:28 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:45:28 localhost podman[92283]: 2025-12-02 08:45:28.556954315 +0000 UTC m=+0.182434992 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step5, release=1761123044, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12) Dec 2 03:45:28 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:45:28 localhost podman[92290]: 2025-12-02 08:45:28.618931309 +0000 UTC m=+0.241087416 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:45:28 localhost podman[92281]: 2025-12-02 08:45:28.667845903 +0000 UTC m=+0.301929261 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:45:28 localhost podman[92281]: 2025-12-02 08:45:28.675299583 +0000 UTC m=+0.309382941 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:45:28 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:45:28 localhost podman[92282]: 2025-12-02 08:45:28.717579449 +0000 UTC m=+0.346187580 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:45:28 localhost podman[92290]: 2025-12-02 08:45:28.725772259 +0000 UTC m=+0.347928366 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:45:28 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:45:29 localhost podman[92282]: 2025-12-02 08:45:29.02814136 +0000 UTC m=+0.656749551 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:45:29 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:45:32 localhost sshd[92403]: main: sshd: ssh-rsa algorithm is disabled Dec 2 03:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:45:32 localhost podman[92405]: 2025-12-02 08:45:32.453886923 +0000 UTC m=+0.083962746 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:45:32 localhost podman[92404]: 2025-12-02 08:45:32.512201959 +0000 UTC m=+0.142115088 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64) Dec 2 03:45:32 localhost systemd[1]: tmp-crun.w7UlE6.mount: Deactivated successfully. 
Dec 2 03:45:32 localhost podman[92405]: 2025-12-02 08:45:32.503070034 +0000 UTC m=+0.133145807 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, container_name=ovn_controller, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:45:32 localhost podman[92404]: 2025-12-02 08:45:32.554109514 +0000 UTC m=+0.184022663 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4) Dec 2 03:45:32 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:45:32 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:45:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:45:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:45:40 localhost systemd[1]: tmp-crun.dyuYyu.mount: Deactivated successfully. 
Dec 2 03:45:40 localhost podman[92452]: 2025-12-02 08:45:40.453211649 +0000 UTC m=+0.090077390 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc.) Dec 2 03:45:40 localhost podman[92452]: 2025-12-02 08:45:40.469049434 +0000 UTC m=+0.105915225 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z) Dec 2 03:45:40 localhost podman[92451]: 2025-12-02 08:45:40.519652544 +0000 UTC m=+0.159393283 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1) Dec 2 03:45:40 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:45:40 localhost podman[92451]: 2025-12-02 08:45:40.579883271 +0000 UTC m=+0.219624050 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-collectd, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=collectd) Dec 2 03:45:40 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:45:41 localhost systemd[1]: tmp-crun.hylmaH.mount: Deactivated successfully. Dec 2 03:45:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:45:50 localhost podman[92488]: 2025-12-02 08:45:50.438688533 +0000 UTC m=+0.080819212 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 
qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, release=1761123044, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:45:50 localhost podman[92488]: 2025-12-02 08:45:50.640576265 +0000 UTC m=+0.282706914 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, container_name=metrics_qdr) Dec 2 03:45:50 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:45:59 localhost podman[92517]: 2025-12-02 08:45:59.458417883 +0000 UTC m=+0.091536529 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:45:59 localhost systemd[1]: tmp-crun.Y7lH0V.mount: Deactivated successfully. Dec 2 03:45:59 localhost podman[92518]: 2025-12-02 08:45:59.559510789 +0000 UTC m=+0.190483238 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:45:59 localhost podman[92519]: 2025-12-02 08:45:59.53011272 +0000 UTC m=+0.159125236 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=) Dec 2 03:45:59 localhost podman[92518]: 2025-12-02 08:45:59.586877314 +0000 UTC m=+0.217849763 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, 
container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_id=tripleo_step5, vendor=Red Hat, Inc., distribution-scope=public) Dec 2 03:45:59 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:45:59 localhost podman[92519]: 2025-12-02 08:45:59.612866452 +0000 UTC m=+0.241878948 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Dec 2 03:45:59 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:45:59 localhost podman[92516]: 2025-12-02 08:45:59.666991696 +0000 UTC m=+0.303227675 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=logrotate_crond, io.openshift.expose-services=, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, config_id=tripleo_step4, release=1761123044) Dec 2 03:45:59 localhost podman[92516]: 2025-12-02 08:45:59.678488875 +0000 UTC m=+0.314724844 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:45:59 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:45:59 localhost podman[92522]: 2025-12-02 08:45:59.721915751 +0000 UTC m=+0.347183047 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:45:59 localhost podman[92522]: 2025-12-02 08:45:59.75202628 +0000 UTC m=+0.377293616 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:45:59 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:45:59 localhost podman[92517]: 2025-12-02 08:45:59.819078161 +0000 UTC m=+0.452196807 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4) Dec 2 03:45:59 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:46:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:46:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:46:03 localhost podman[92639]: 2025-12-02 08:46:03.442567755 +0000 UTC m=+0.076689291 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 03:46:03 localhost podman[92639]: 2025-12-02 08:46:03.484157881 +0000 UTC m=+0.118279407 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:46:03 localhost podman[92640]: 2025-12-02 08:46:03.49600722 +0000 UTC m=+0.125879662 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, vcs-type=git) Dec 2 03:46:03 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:46:03 localhost podman[92640]: 2025-12-02 08:46:03.543825745 +0000 UTC m=+0.173698197 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4) Dec 2 03:46:03 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:46:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:46:11 localhost podman[92685]: 2025-12-02 08:46:11.430554217 +0000 UTC m=+0.070859114 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 
collectd, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:46:11 localhost podman[92685]: 2025-12-02 08:46:11.438829679 +0000 UTC m=+0.079134596 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, version=17.1.12, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Dec 2 03:46:11 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:46:11 localhost systemd[1]: tmp-crun.cBfkSW.mount: Deactivated successfully. 
Dec 2 03:46:11 localhost podman[92686]: 2025-12-02 08:46:11.501849961 +0000 UTC m=+0.138370777 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public) Dec 2 03:46:11 localhost podman[92686]: 2025-12-02 08:46:11.538081024 +0000 UTC m=+0.174601850 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, version=17.1.12, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 2 03:46:11 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:46:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:46:21 localhost podman[92722]: 2025-12-02 08:46:21.425845122 +0000 UTC m=+0.073026462 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step1) Dec 2 03:46:21 localhost podman[92722]: 2025-12-02 08:46:21.614053627 +0000 UTC m=+0.261234967 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true) Dec 2 03:46:21 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:46:30 localhost systemd[1]: tmp-crun.hqydS2.mount: Deactivated successfully. Dec 2 03:46:30 localhost podman[92863]: 2025-12-02 08:46:30.466977269 +0000 UTC m=+0.102708699 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron) Dec 2 03:46:30 localhost systemd[1]: tmp-crun.bE1VCf.mount: Deactivated successfully. Dec 2 03:46:30 localhost podman[92866]: 2025-12-02 08:46:30.506911683 +0000 UTC m=+0.135572793 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, container_name=ceilometer_agent_compute, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:46:30 localhost podman[92867]: 2025-12-02 08:46:30.514705391 +0000 UTC m=+0.140607967 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, 
io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:46:30 localhost podman[92863]: 2025-12-02 08:46:30.525765108 +0000 UTC m=+0.161496508 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044) Dec 2 03:46:30 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:46:30 localhost podman[92866]: 2025-12-02 08:46:30.537565616 +0000 UTC m=+0.166226796 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:46:30 localhost podman[92867]: 2025-12-02 08:46:30.559432193 +0000 UTC m=+0.185334689 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:46:30 localhost podman[92865]: 2025-12-02 08:46:30.568683902 +0000 UTC m=+0.201848613 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:46:30 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:46:30 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:46:30 localhost podman[92864]: 2025-12-02 08:46:30.602329806 +0000 UTC m=+0.237399748 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=nova_migration_target, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-nova-compute-container) Dec 2 03:46:30 localhost podman[92865]: 2025-12-02 08:46:30.618990763 +0000 UTC m=+0.252155424 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git, container_name=nova_compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, config_id=tripleo_step5) Dec 2 03:46:30 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:46:30 localhost podman[92864]: 2025-12-02 08:46:30.976996148 +0000 UTC m=+0.612066140 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_migration_target, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4) Dec 2 03:46:30 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:46:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:46:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:46:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:46:34 localhost recover_tripleo_nova_virtqemud[93004]: 62312 Dec 2 03:46:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:46:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:46:34 localhost systemd[1]: tmp-crun.cyK3p6.mount: Deactivated successfully. Dec 2 03:46:34 localhost systemd[1]: tmp-crun.n63NaA.mount: Deactivated successfully. 
Dec 2 03:46:34 localhost podman[92995]: 2025-12-02 08:46:34.623488989 +0000 UTC m=+0.259424199 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 2 03:46:34 localhost podman[92994]: 2025-12-02 08:46:34.575813189 +0000 UTC m=+0.214636956 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:46:34 localhost podman[92994]: 2025-12-02 08:46:34.70913089 +0000 UTC m=+0.347954637 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true) Dec 2 03:46:34 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:46:34 localhost podman[92995]: 2025-12-02 08:46:34.729506327 +0000 UTC m=+0.365441507 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public) Dec 2 03:46:34 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:46:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:46:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:46:42 localhost podman[93043]: 2025-12-02 08:46:42.447826585 +0000 UTC m=+0.089248008 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, container_name=collectd, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:46:42 localhost podman[93043]: 2025-12-02 08:46:42.458883542 +0000 UTC m=+0.100304935 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3) Dec 2 03:46:42 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:46:42 localhost systemd[1]: tmp-crun.s8H7Hd.mount: Deactivated successfully. 
Dec 2 03:46:42 localhost podman[93044]: 2025-12-02 08:46:42.552150977 +0000 UTC m=+0.190407025 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-iscsid-container, version=17.1.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:46:42 localhost podman[93044]: 2025-12-02 08:46:42.566994197 +0000 UTC m=+0.205250215 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, release=1761123044, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:46:42 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:46:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:46:52 localhost podman[93081]: 2025-12-02 08:46:52.492859696 +0000 UTC m=+0.132010987 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 
qdrouterd, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team) Dec 2 03:46:52 localhost podman[93081]: 2025-12-02 08:46:52.688916372 +0000 UTC m=+0.328067603 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step1) Dec 2 03:46:52 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:47:01 localhost systemd[1]: tmp-crun.czvx4g.mount: Deactivated successfully. Dec 2 03:47:01 localhost podman[93110]: 2025-12-02 08:47:01.457748686 +0000 UTC m=+0.089897826 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, release=1761123044, distribution-scope=public, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:47:01 localhost podman[93110]: 2025-12-02 08:47:01.498978443 +0000 UTC m=+0.131127533 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:47:01 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:47:01 localhost podman[93113]: 2025-12-02 08:47:01.470986102 +0000 UTC m=+0.091386156 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, architecture=x86_64, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute) Dec 2 03:47:01 localhost podman[93113]: 2025-12-02 08:47:01.550303942 +0000 UTC m=+0.170704016 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64) Dec 2 03:47:01 localhost podman[93112]: 2025-12-02 08:47:01.49884284 +0000 UTC m=+0.124769583 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, io.openshift.expose-services=) Dec 2 03:47:01 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:47:01 localhost podman[93119]: 2025-12-02 08:47:01.562075088 +0000 UTC m=+0.183812308 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:47:01 localhost podman[93111]: 2025-12-02 08:47:01.610649883 +0000 UTC m=+0.237975683 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:47:01 localhost podman[93112]: 2025-12-02 08:47:01.630444525 +0000 UTC m=+0.256371228 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:47:01 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:47:01 localhost podman[93119]: 2025-12-02 08:47:01.687549538 +0000 UTC m=+0.309286768 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 2 03:47:01 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:47:02 localhost podman[93111]: 2025-12-02 08:47:02.016980717 +0000 UTC m=+0.644306477 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 2 03:47:02 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:47:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:47:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:47:05 localhost podman[93232]: 2025-12-02 08:47:05.445868852 +0000 UTC m=+0.082814426 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, 
managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public) Dec 2 03:47:05 localhost podman[93232]: 2025-12-02 08:47:05.467217855 +0000 UTC m=+0.104163369 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com) Dec 2 03:47:05 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Deactivated successfully. Dec 2 03:47:05 localhost podman[93231]: 2025-12-02 08:47:05.54446753 +0000 UTC m=+0.184957088 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:47:05 localhost podman[93231]: 2025-12-02 08:47:05.617362988 +0000 UTC m=+0.257852506 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 2 03:47:05 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 4776 writes, 21K keys, 4776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4776 writes, 569 syncs, 8.39 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:47:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:47:13 localhost systemd[1]: tmp-crun.1urBdZ.mount: Deactivated successfully. 
Dec 2 03:47:13 localhost podman[93280]: 2025-12-02 08:47:13.431524756 +0000 UTC m=+0.078964171 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, release=1761123044, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:47:13 localhost podman[93280]: 2025-12-02 08:47:13.463781143 +0000 UTC m=+0.111220618 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, 
distribution-scope=public, container_name=collectd, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:47:13 localhost systemd[1]: tmp-crun.Kt1rVX.mount: Deactivated successfully. 
Dec 2 03:47:13 localhost podman[93281]: 2025-12-02 08:47:13.482010972 +0000 UTC m=+0.124603627 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Dec 2 03:47:13 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:47:13 localhost podman[93281]: 2025-12-02 08:47:13.49086423 +0000 UTC m=+0.133456875 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:47:13 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:47:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:47:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 5722 writes, 25K keys, 5722 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5722 writes, 780 syncs, 7.34 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:47:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:47:23 localhost podman[93319]: 2025-12-02 08:47:23.433228186 +0000 UTC m=+0.078413737 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1) Dec 2 03:47:23 localhost podman[93319]: 2025-12-02 08:47:23.629831426 +0000 UTC m=+0.275016927 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, release=1761123044, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, tcib_managed=true, build-date=2025-11-18T22:49:46Z, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1) Dec 2 03:47:23 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:47:32 localhost systemd[1]: tmp-crun.6lJxol.mount: Deactivated successfully. 
Dec 2 03:47:32 localhost podman[93349]: 2025-12-02 08:47:32.442851775 +0000 UTC m=+0.075849338 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:47:32 localhost podman[93356]: 2025-12-02 08:47:32.50482242 +0000 UTC m=+0.129306864 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z) Dec 2 03:47:32 localhost podman[93348]: 2025-12-02 08:47:32.464765003 +0000 UTC m=+0.099688998 container 
health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, distribution-scope=public, container_name=nova_migration_target, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4) Dec 2 03:47:32 localhost podman[93349]: 2025-12-02 08:47:32.52681027 +0000 UTC m=+0.159807833 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step5) Dec 2 03:47:32 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:47:32 localhost podman[93350]: 2025-12-02 08:47:32.492757755 +0000 UTC m=+0.121440282 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, architecture=x86_64, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:47:32 localhost podman[93356]: 2025-12-02 08:47:32.582971199 +0000 UTC m=+0.207455713 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z) Dec 2 03:47:32 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:47:32 localhost podman[93347]: 2025-12-02 08:47:32.595337291 +0000 UTC m=+0.232684991 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:32Z, distribution-scope=public, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 2 03:47:32 localhost podman[93347]: 2025-12-02 08:47:32.606890441 +0000 UTC m=+0.244238131 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container) Dec 2 03:47:32 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:47:32 localhost podman[93350]: 2025-12-02 08:47:32.626052596 +0000 UTC m=+0.254735133 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, 
name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=) Dec 2 03:47:32 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:47:32 localhost podman[93348]: 2025-12-02 08:47:32.834911736 +0000 UTC m=+0.469835681 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, distribution-scope=public, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:47:32 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. 
Dec 2 03:47:34 localhost podman[93599]: Dec 2 03:47:34 localhost podman[93599]: 2025-12-02 08:47:34.22156929 +0000 UTC m=+0.050934080 container create d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hopper, build-date=2025-11-26T19:44:28Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1763362218) Dec 2 03:47:34 localhost systemd[1]: Started libpod-conmon-d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645.scope. Dec 2 03:47:34 localhost systemd[1]: Started libcrun container. 
Dec 2 03:47:34 localhost podman[93599]: 2025-12-02 08:47:34.299845762 +0000 UTC m=+0.129210542 container init d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hopper, distribution-scope=public, ceph=True, name=rhceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 03:47:34 localhost podman[93599]: 2025-12-02 08:47:34.201014317 +0000 UTC m=+0.030379127 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 03:47:34 localhost podman[93599]: 2025-12-02 08:47:34.316131399 +0000 UTC m=+0.145496199 container start d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hopper, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, 
distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main) Dec 2 03:47:34 localhost podman[93599]: 2025-12-02 08:47:34.316413126 +0000 UTC m=+0.145777966 container attach d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hopper, maintainer=Guillaume Abrioux , architecture=x86_64, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7) Dec 2 03:47:34 localhost eager_hopper[93614]: 167 167 Dec 2 03:47:34 localhost systemd[1]: libpod-d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645.scope: Deactivated successfully. Dec 2 03:47:34 localhost podman[93599]: 2025-12-02 08:47:34.31948659 +0000 UTC m=+0.148851430 container died d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hopper, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218) Dec 2 03:47:34 localhost podman[93621]: 2025-12-02 08:47:34.432840684 +0000 UTC m=+0.101143308 container remove d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_hopper, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=1763362218, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 03:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-de2fc7a818f86380606426cf501fef0c61cec998eca7d5052a0b9382390fe4bc-merged.mount: Deactivated successfully. Dec 2 03:47:34 localhost systemd[1]: libpod-conmon-d1ad154b99925e7db1c9dc3e9b884fef57f0e9c016d1300c44ea436e75308645.scope: Deactivated successfully. 
Dec 2 03:47:34 localhost podman[93641]: Dec 2 03:47:34 localhost podman[93641]: 2025-12-02 08:47:34.66240124 +0000 UTC m=+0.078442368 container create 21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_jang, ceph=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64) Dec 2 03:47:34 localhost systemd[1]: Started libpod-conmon-21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034.scope. Dec 2 03:47:34 localhost systemd[1]: Started libcrun container. 
Dec 2 03:47:34 localhost podman[93641]: 2025-12-02 08:47:34.630240965 +0000 UTC m=+0.046282123 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 03:47:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97fa7eb2d0580ab8ea0f84e0ec946783783d366923a5943919feb94a484a5106/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 03:47:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97fa7eb2d0580ab8ea0f84e0ec946783783d366923a5943919feb94a484a5106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 03:47:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97fa7eb2d0580ab8ea0f84e0ec946783783d366923a5943919feb94a484a5106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 03:47:34 localhost podman[93641]: 2025-12-02 08:47:34.735851882 +0000 UTC m=+0.151893020 container init 21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_jang, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, version=7, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph) Dec 2 03:47:34 localhost podman[93641]: 2025-12-02 08:47:34.747903086 +0000 UTC m=+0.163944214 container start 21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_jang, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, release=1763362218, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main) Dec 2 03:47:34 localhost podman[93641]: 2025-12-02 08:47:34.748157423 +0000 UTC m=+0.164198591 container attach 21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_jang, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1763362218, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=) Dec 2 03:47:35 localhost blissful_jang[93656]: [ Dec 2 03:47:35 localhost blissful_jang[93656]: { Dec 2 03:47:35 localhost blissful_jang[93656]: "available": false, Dec 2 03:47:35 localhost blissful_jang[93656]: "ceph_device": false, Dec 2 03:47:35 localhost blissful_jang[93656]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 03:47:35 localhost blissful_jang[93656]: "lsm_data": {}, Dec 2 03:47:35 localhost blissful_jang[93656]: "lvs": [], Dec 2 03:47:35 localhost blissful_jang[93656]: "path": "/dev/sr0", Dec 2 03:47:35 localhost blissful_jang[93656]: "rejected_reasons": [ Dec 2 03:47:35 localhost blissful_jang[93656]: "Insufficient space (<5GB)", Dec 2 03:47:35 localhost blissful_jang[93656]: "Has a FileSystem" Dec 2 03:47:35 localhost blissful_jang[93656]: ], Dec 2 03:47:35 localhost blissful_jang[93656]: "sys_api": { Dec 2 03:47:35 localhost blissful_jang[93656]: "actuators": null, Dec 2 03:47:35 localhost blissful_jang[93656]: "device_nodes": "sr0", Dec 2 03:47:35 localhost blissful_jang[93656]: "human_readable_size": "482.00 KB", Dec 2 03:47:35 localhost blissful_jang[93656]: 
"id_bus": "ata", Dec 2 03:47:35 localhost blissful_jang[93656]: "model": "QEMU DVD-ROM", Dec 2 03:47:35 localhost blissful_jang[93656]: "nr_requests": "2", Dec 2 03:47:35 localhost blissful_jang[93656]: "partitions": {}, Dec 2 03:47:35 localhost blissful_jang[93656]: "path": "/dev/sr0", Dec 2 03:47:35 localhost blissful_jang[93656]: "removable": "1", Dec 2 03:47:35 localhost blissful_jang[93656]: "rev": "2.5+", Dec 2 03:47:35 localhost blissful_jang[93656]: "ro": "0", Dec 2 03:47:35 localhost blissful_jang[93656]: "rotational": "1", Dec 2 03:47:35 localhost blissful_jang[93656]: "sas_address": "", Dec 2 03:47:35 localhost blissful_jang[93656]: "sas_device_handle": "", Dec 2 03:47:35 localhost blissful_jang[93656]: "scheduler_mode": "mq-deadline", Dec 2 03:47:35 localhost blissful_jang[93656]: "sectors": 0, Dec 2 03:47:35 localhost blissful_jang[93656]: "sectorsize": "2048", Dec 2 03:47:35 localhost blissful_jang[93656]: "size": 493568.0, Dec 2 03:47:35 localhost blissful_jang[93656]: "support_discard": "0", Dec 2 03:47:35 localhost blissful_jang[93656]: "type": "disk", Dec 2 03:47:35 localhost blissful_jang[93656]: "vendor": "QEMU" Dec 2 03:47:35 localhost blissful_jang[93656]: } Dec 2 03:47:35 localhost blissful_jang[93656]: } Dec 2 03:47:35 localhost blissful_jang[93656]: ] Dec 2 03:47:35 localhost systemd[1]: libpod-21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034.scope: Deactivated successfully. 
Dec 2 03:47:35 localhost podman[93641]: 2025-12-02 08:47:35.692043475 +0000 UTC m=+1.108084603 container died 21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_jang, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, name=rhceph, release=1763362218, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container) Dec 2 03:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:47:35 localhost systemd[1]: var-lib-containers-storage-overlay-97fa7eb2d0580ab8ea0f84e0ec946783783d366923a5943919feb94a484a5106-merged.mount: Deactivated successfully. 
Dec 2 03:47:35 localhost podman[95513]: 2025-12-02 08:47:35.782775292 +0000 UTC m=+0.083450933 container remove 21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_jang, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, io.openshift.expose-services=) Dec 2 03:47:35 localhost systemd[1]: libpod-conmon-21e5ea614fce29ed4fe452b65b2ab3f0058dd32793b3da06fa5faf6e9e14f034.scope: Deactivated successfully. Dec 2 03:47:35 localhost systemd[1]: tmp-crun.XaAAmd.mount: Deactivated successfully. 
Dec 2 03:47:35 localhost podman[95522]: 2025-12-02 08:47:35.86275898 +0000 UTC m=+0.142206840 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., vcs-type=git) Dec 2 03:47:35 localhost podman[95522]: 2025-12-02 08:47:35.883161228 +0000 UTC m=+0.162609128 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, tcib_managed=true) Dec 2 03:47:35 localhost podman[95522]: unhealthy Dec 2 03:47:35 localhost podman[95519]: 2025-12-02 08:47:35.840716399 +0000 UTC m=+0.120328453 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:47:35 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:47:35 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 03:47:35 localhost podman[95519]: 2025-12-02 08:47:35.923114951 +0000 UTC m=+0.202726985 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:47:35 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:47:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:47:44 localhost podman[95591]: 2025-12-02 08:47:44.459685086 +0000 UTC m=+0.090297766 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Dec 2 03:47:44 localhost systemd[1]: tmp-crun.2P7hxy.mount: Deactivated successfully. Dec 2 03:47:44 localhost podman[95590]: 2025-12-02 08:47:44.516173643 +0000 UTC m=+0.148146439 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-collectd-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, release=1761123044, config_id=tripleo_step3) Dec 2 03:47:44 localhost podman[95591]: 2025-12-02 08:47:44.525573926 +0000 UTC m=+0.156186556 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:47:44 localhost podman[95590]: 2025-12-02 08:47:44.529075 +0000 UTC m=+0.161047846 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, release=1761123044, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container) Dec 2 03:47:44 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:47:44 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:47:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:47:54 localhost systemd[1]: tmp-crun.Vl07kP.mount: Deactivated successfully. 
Dec 2 03:47:54 localhost podman[95629]: 2025-12-02 08:47:54.453244974 +0000 UTC m=+0.097298435 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step1, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team) Dec 2 03:47:54 localhost podman[95629]: 2025-12-02 08:47:54.689182881 +0000 UTC m=+0.333236312 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, version=17.1.12, vcs-type=git) Dec 2 03:47:54 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:48:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:48:03 localhost recover_tripleo_nova_virtqemud[95685]: 62312 Dec 2 03:48:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:48:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:48:03 localhost systemd[1]: tmp-crun.ktcNvC.mount: Deactivated successfully. Dec 2 03:48:03 localhost podman[95662]: 2025-12-02 08:48:03.453759598 +0000 UTC m=+0.084723746 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:48:03 localhost podman[95662]: 2025-12-02 08:48:03.48881646 +0000 UTC m=+0.119780578 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:48:03 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:48:03 localhost podman[95660]: 2025-12-02 08:48:03.513416311 +0000 UTC m=+0.147540714 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc.) 
Dec 2 03:48:03 localhost podman[95660]: 2025-12-02 08:48:03.545047821 +0000 UTC m=+0.179172254 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_compute, version=17.1.12, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.buildah.version=1.41.4, distribution-scope=public) Dec 2 03:48:03 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:48:03 localhost podman[95659]: 2025-12-02 08:48:03.594849909 +0000 UTC m=+0.228808177 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:48:03 localhost podman[95661]: 2025-12-02 08:48:03.553359854 +0000 UTC m=+0.181604129 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4) Dec 2 03:48:03 localhost podman[95658]: 2025-12-02 08:48:03.654556172 +0000 UTC m=+0.289880497 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-cron-container, release=1761123044, version=17.1.12, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public) Dec 2 03:48:03 localhost podman[95658]: 2025-12-02 08:48:03.688012921 +0000 UTC m=+0.323337276 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public) Dec 2 03:48:03 localhost podman[95661]: 2025-12-02 08:48:03.686778228 +0000 UTC m=+0.315022453 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true) Dec 2 03:48:03 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:48:03 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:48:03 localhost podman[95659]: 2025-12-02 08:48:03.941655714 +0000 UTC m=+0.575614032 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-compute-container, 
url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:48:03 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:48:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:48:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:48:06 localhost systemd[1]: tmp-crun.GYZbCk.mount: Deactivated successfully. 
Dec 2 03:48:06 localhost podman[95778]: 2025-12-02 08:48:06.449727149 +0000 UTC m=+0.089516016 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com) Dec 2 03:48:06 localhost podman[95779]: 2025-12-02 08:48:06.480788963 +0000 UTC m=+0.121231227 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller) Dec 2 03:48:06 localhost podman[95779]: 2025-12-02 08:48:06.505975329 +0000 UTC m=+0.146417653 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Dec 2 03:48:06 localhost podman[95779]: unhealthy Dec 2 03:48:06 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:48:06 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 03:48:06 localhost podman[95778]: 2025-12-02 08:48:06.531064763 +0000 UTC m=+0.170853680 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 2 03:48:06 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Deactivated successfully. Dec 2 03:48:07 localhost systemd[1]: tmp-crun.VFUmya.mount: Deactivated successfully. Dec 2 03:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:48:15 localhost systemd[1]: tmp-crun.TccHNc.mount: Deactivated successfully. 
Dec 2 03:48:15 localhost podman[95829]: 2025-12-02 08:48:15.441397688 +0000 UTC m=+0.078924861 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, container_name=iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-iscsid, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:48:15 localhost podman[95828]: 2025-12-02 08:48:15.458181009 +0000 UTC m=+0.094276574 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, container_name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 2 03:48:15 localhost podman[95829]: 2025-12-02 08:48:15.477752374 +0000 UTC m=+0.115279507 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 
17.1 iscsid, name=rhosp17/openstack-iscsid) Dec 2 03:48:15 localhost podman[95828]: 2025-12-02 08:48:15.478445743 +0000 UTC m=+0.114541308 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:48:15 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:48:15 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:48:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:48:25 localhost podman[95867]: 2025-12-02 08:48:25.446538685 +0000 UTC m=+0.089825274 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step1, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:48:25 localhost podman[95867]: 2025-12-02 08:48:25.674056386 +0000 UTC m=+0.317342975 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd) Dec 2 03:48:25 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:48:34 localhost podman[95900]: 2025-12-02 08:48:34.456944468 +0000 UTC m=+0.082748594 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Dec 2 03:48:34 localhost systemd[1]: tmp-crun.u6Tgem.mount: Deactivated successfully. 
Dec 2 03:48:34 localhost podman[95902]: 2025-12-02 08:48:34.518867041 +0000 UTC m=+0.140112505 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:48:34 localhost podman[95900]: 2025-12-02 08:48:34.536374251 +0000 UTC m=+0.162178427 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, container_name=nova_compute, config_id=tripleo_step5, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:48:34 localhost 
systemd[1]: tmp-crun.VWwctY.mount: Deactivated successfully. Dec 2 03:48:34 localhost podman[95902]: 2025-12-02 08:48:34.579051187 +0000 UTC m=+0.200296691 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:48:34 localhost podman[95908]: 2025-12-02 08:48:34.579237522 +0000 UTC m=+0.198108682 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, 
version=17.1.12, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044) Dec 2 03:48:34 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:48:34 localhost podman[95908]: 2025-12-02 08:48:34.606849583 +0000 UTC m=+0.225720733 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true) Dec 2 03:48:34 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:48:34 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:48:34 localhost podman[95898]: 2025-12-02 08:48:34.689677388 +0000 UTC m=+0.321509877 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, tcib_managed=true, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=logrotate_crond, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4) Dec 2 03:48:34 localhost podman[95899]: 2025-12-02 08:48:34.728905912 +0000 UTC m=+0.357347239 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 
nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:48:34 localhost podman[95898]: 2025-12-02 08:48:34.747541962 +0000 UTC m=+0.379374430 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, architecture=x86_64, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team) Dec 2 03:48:34 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:48:35 localhost podman[95899]: 2025-12-02 08:48:35.120110629 +0000 UTC m=+0.748551976 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:48:35 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:48:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:48:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:48:37 localhost podman[96020]: 2025-12-02 08:48:37.440852142 +0000 UTC m=+0.083323409 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:48:37 localhost podman[96020]: 2025-12-02 08:48:37.490091504 +0000 UTC m=+0.132562751 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, 
config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
architecture=x86_64, version=17.1.12, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:48:37 localhost podman[96020]: unhealthy Dec 2 03:48:37 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:48:37 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:48:37 localhost podman[96021]: 2025-12-02 08:48:37.489879419 +0000 UTC m=+0.130390833 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible) Dec 2 03:48:37 localhost podman[96021]: 2025-12-02 08:48:37.570803272 +0000 UTC m=+0.211314676 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, 
batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Dec 2 03:48:37 localhost podman[96021]: unhealthy Dec 2 03:48:37 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:48:37 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:48:46 localhost podman[96190]: 2025-12-02 08:48:46.4604208 +0000 UTC m=+0.098158507 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, com.redhat.component=openstack-iscsid-container, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 2 03:48:46 localhost podman[96190]: 2025-12-02 08:48:46.468798955 +0000 UTC m=+0.106536642 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:48:46 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:48:46 localhost podman[96189]: 2025-12-02 08:48:46.434636618 +0000 UTC m=+0.075398056 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 2 03:48:46 localhost podman[96189]: 2025-12-02 08:48:46.519366774 +0000 UTC m=+0.160128182 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Dec 2 03:48:46 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:48:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:48:56 localhost systemd[1]: tmp-crun.C11yIt.mount: Deactivated successfully. 
Dec 2 03:48:56 localhost podman[96228]: 2025-12-02 08:48:56.450001911 +0000 UTC m=+0.094168870 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, config_id=tripleo_step1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12) Dec 2 03:48:56 localhost podman[96228]: 2025-12-02 08:48:56.643055996 +0000 UTC m=+0.287222895 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git) Dec 2 03:48:56 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:49:05 localhost podman[96258]: 2025-12-02 08:49:05.454303559 +0000 UTC m=+0.090934374 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=logrotate_crond, version=17.1.12) Dec 2 03:49:05 localhost podman[96260]: 2025-12-02 08:49:05.508639788 +0000 UTC m=+0.139872277 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 2 
03:49:05 localhost podman[96259]: 2025-12-02 08:49:05.479025793 +0000 UTC m=+0.111947448 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:49:05 localhost podman[96258]: 2025-12-02 08:49:05.539058736 +0000 UTC m=+0.175689581 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vcs-type=git) Dec 2 03:49:05 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:49:05 localhost podman[96260]: 2025-12-02 08:49:05.560884262 +0000 UTC m=+0.192116741 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, container_name=nova_compute, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, version=17.1.12) Dec 2 03:49:05 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:49:05 localhost podman[96267]: 2025-12-02 08:49:05.605924582 +0000 UTC m=+0.227595945 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Dec 2 03:49:05 localhost podman[96267]: 2025-12-02 08:49:05.634953021 +0000 UTC m=+0.256624344 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, container_name=ceilometer_agent_ipmi) Dec 2 03:49:05 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:49:05 localhost podman[96261]: 2025-12-02 08:49:05.7335728 +0000 UTC m=+0.360061542 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_id=tripleo_step4, vcs-type=git, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true) Dec 2 03:49:05 localhost podman[96261]: 2025-12-02 08:49:05.761979563 +0000 UTC m=+0.388468235 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Dec 2 03:49:05 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:49:05 localhost podman[96259]: 2025-12-02 08:49:05.834131361 +0000 UTC m=+0.467053046 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044) Dec 2 03:49:05 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:49:06 localhost systemd[1]: tmp-crun.zlmAA6.mount: Deactivated successfully. Dec 2 03:49:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:49:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:49:08 localhost podman[96376]: 2025-12-02 08:49:08.442441118 +0000 UTC m=+0.078583942 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12) Dec 2 03:49:08 localhost podman[96376]: 2025-12-02 08:49:08.487073187 +0000 UTC m=+0.123215941 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 03:49:08 localhost podman[96376]: unhealthy Dec 2 03:49:08 localhost podman[96377]: 2025-12-02 08:49:08.498199395 +0000 UTC m=+0.138174962 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:49:08 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:49:08 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:49:08 localhost podman[96377]: 2025-12-02 08:49:08.542195538 +0000 UTC m=+0.182171085 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1) Dec 2 03:49:08 localhost podman[96377]: unhealthy Dec 2 03:49:08 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:49:08 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:49:16 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:49:16 localhost recover_tripleo_nova_virtqemud[96418]: 62312 Dec 2 03:49:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:49:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:49:17 localhost systemd[1]: tmp-crun.OMnJxl.mount: Deactivated successfully. 
Dec 2 03:49:17 localhost podman[96419]: 2025-12-02 08:49:17.460915397 +0000 UTC m=+0.101794666 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:49:17 localhost podman[96420]: 2025-12-02 08:49:17.499931944 +0000 UTC m=+0.138158741 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible) Dec 2 03:49:17 localhost podman[96420]: 2025-12-02 08:49:17.53401907 +0000 UTC m=+0.172245827 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 2 03:49:17 
localhost podman[96419]: 2025-12-02 08:49:17.54186793 +0000 UTC m=+0.182747229 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Dec 2 03:49:17 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:49:17 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:49:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:49:27 localhost systemd[1]: tmp-crun.WvKnvh.mount: Deactivated successfully. 
Dec 2 03:49:27 localhost podman[96461]: 2025-12-02 08:49:27.449977933 +0000 UTC m=+0.093153002 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:49:27 localhost podman[96461]: 2025-12-02 08:49:27.639400771 +0000 UTC m=+0.282575860 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z) Dec 2 03:49:27 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:49:36 localhost podman[96488]: 2025-12-02 08:49:36.450912588 +0000 UTC m=+0.087408068 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public) Dec 2 03:49:36 localhost podman[96488]: 2025-12-02 08:49:36.485013555 +0000 UTC m=+0.121509025 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team) Dec 2 03:49:36 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:49:36 localhost systemd[1]: tmp-crun.eh6ZTa.mount: Deactivated successfully. 
Dec 2 03:49:36 localhost podman[96490]: 2025-12-02 08:49:36.560890883 +0000 UTC m=+0.190998452 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, container_name=nova_compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044) Dec 2 03:49:36 localhost podman[96490]: 2025-12-02 08:49:36.59205551 +0000 UTC m=+0.222163049 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, 
io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step5, url=https://www.redhat.com, container_name=nova_compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:49:36 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:49:36 localhost podman[96503]: 2025-12-02 08:49:36.662533522 +0000 UTC m=+0.282689903 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:49:36 localhost podman[96489]: 2025-12-02 08:49:36.709076713 +0000 UTC m=+0.342924992 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack 
TripleO Team, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:49:36 localhost 
podman[96491]: 2025-12-02 08:49:36.716937633 +0000 UTC m=+0.344271038 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute) Dec 2 03:49:36 localhost podman[96503]: 2025-12-02 08:49:36.739698544 +0000 UTC m=+0.359854925 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:49:36 localhost podman[96491]: 2025-12-02 08:49:36.749225161 +0000 UTC m=+0.376558596 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, container_name=ceilometer_agent_compute, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:49:36 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:49:36 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:49:37 localhost podman[96489]: 2025-12-02 08:49:37.115182009 +0000 UTC m=+0.749030298 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, batch=17.1_20251118.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 2 03:49:37 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:49:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:49:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:49:39 localhost podman[96611]: 2025-12-02 08:49:39.441650017 +0000 UTC m=+0.083726310 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1) Dec 2 03:49:39 localhost podman[96611]: 2025-12-02 08:49:39.48010242 +0000 UTC m=+0.122178723 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:49:39 localhost podman[96611]: unhealthy Dec 2 03:49:39 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:49:39 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:49:39 localhost systemd[1]: tmp-crun.I7l1J6.mount: Deactivated successfully. Dec 2 03:49:39 localhost podman[96612]: 2025-12-02 08:49:39.500460606 +0000 UTC m=+0.139430026 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, build-date=2025-11-18T23:34:05Z, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible) Dec 2 03:49:39 localhost podman[96612]: 2025-12-02 08:49:39.520988437 +0000 UTC m=+0.159957797 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible) Dec 2 03:49:39 localhost podman[96612]: unhealthy Dec 2 03:49:39 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:49:39 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:49:48 localhost podman[96732]: 2025-12-02 08:49:48.448430381 +0000 UTC m=+0.082098626 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:49:48 localhost podman[96732]: 2025-12-02 08:49:48.461983605 +0000 UTC m=+0.095651840 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-type=git, container_name=iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:49:48 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:49:48 localhost podman[96731]: 2025-12-02 08:49:48.54255793 +0000 UTC m=+0.179120723 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, version=17.1.12, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:49:48 localhost podman[96731]: 2025-12-02 08:49:48.555041784 +0000 UTC m=+0.191604637 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-collectd-container, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:49:48 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:49:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:49:58 localhost systemd[1]: tmp-crun.VfTIKe.mount: Deactivated successfully. Dec 2 03:49:58 localhost podman[96767]: 2025-12-02 08:49:58.437818868 +0000 UTC m=+0.076035834 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=metrics_qdr, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Dec 2 03:49:58 localhost podman[96767]: 2025-12-02 08:49:58.658108225 +0000 UTC m=+0.296325211 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, container_name=metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd) Dec 2 03:49:58 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:50:07 localhost systemd[1]: tmp-crun.eRJhKh.mount: Deactivated successfully. Dec 2 03:50:07 localhost podman[96797]: 2025-12-02 08:50:07.430573004 +0000 UTC m=+0.069643862 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, tcib_managed=true, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:50:07 localhost podman[96796]: 2025-12-02 08:50:07.433871233 +0000 UTC m=+0.077401440 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible) Dec 2 03:50:07 localhost podman[96798]: 2025-12-02 08:50:07.472556852 +0000 UTC m=+0.110142329 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12) Dec 2 03:50:07 localhost podman[96805]: 2025-12-02 08:50:07.458684989 +0000 UTC m=+0.087614204 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 2 03:50:07 localhost podman[96796]: 2025-12-02 08:50:07.514479108 +0000 UTC m=+0.158009335 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-type=git, 
com.redhat.component=openstack-cron-container, batch=17.1_20251118.1) Dec 2 03:50:07 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:50:07 localhost podman[96805]: 2025-12-02 08:50:07.541924365 +0000 UTC m=+0.170853600 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:50:07 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:50:07 localhost podman[96804]: 2025-12-02 08:50:07.519243646 +0000 UTC m=+0.154908702 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible) Dec 2 03:50:07 localhost podman[96798]: 2025-12-02 08:50:07.566145305 +0000 UTC m=+0.203730812 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 2 03:50:07 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:50:07 localhost podman[96804]: 2025-12-02 08:50:07.604380652 +0000 UTC m=+0.240045738 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:50:07 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:50:07 localhost podman[96797]: 2025-12-02 08:50:07.827118155 +0000 UTC m=+0.466189023 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:50:07 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:50:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:50:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:50:10 localhost systemd[1]: tmp-crun.Bq4GEO.mount: Deactivated successfully. 
Dec 2 03:50:10 localhost podman[96914]: 2025-12-02 08:50:10.441868805 +0000 UTC m=+0.080662768 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, container_name=ovn_metadata_agent) Dec 2 03:50:10 localhost podman[96914]: 2025-12-02 08:50:10.45699127 +0000 UTC m=+0.095785243 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 03:50:10 localhost podman[96914]: unhealthy Dec 2 03:50:10 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:50:10 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:50:10 localhost podman[96915]: 2025-12-02 08:50:10.538708846 +0000 UTC m=+0.175813073 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-type=git) Dec 2 03:50:10 localhost podman[96915]: 2025-12-02 08:50:10.557001037 +0000 UTC m=+0.194105284 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4) Dec 2 03:50:10 localhost podman[96915]: unhealthy Dec 2 03:50:10 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:50:10 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:50:19 localhost systemd[1]: tmp-crun.Jh65vu.mount: Deactivated successfully. 
Dec 2 03:50:19 localhost podman[96953]: 2025-12-02 08:50:19.444772084 +0000 UTC m=+0.082443916 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, name=rhosp17/openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z) Dec 2 03:50:19 localhost podman[96953]: 2025-12-02 08:50:19.484199733 +0000 UTC m=+0.121871565 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Dec 2 03:50:19 localhost systemd[1]: tmp-crun.XnZAfR.mount: Deactivated successfully. Dec 2 03:50:19 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:50:19 localhost podman[96954]: 2025-12-02 08:50:19.502112744 +0000 UTC m=+0.135097899 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, 
vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:50:19 localhost podman[96954]: 2025-12-02 08:50:19.516046148 +0000 UTC m=+0.149031303 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:50:19 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:50:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:50:29 localhost podman[96991]: 2025-12-02 08:50:29.444035468 +0000 UTC m=+0.081209632 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:50:29 localhost podman[96991]: 2025-12-02 08:50:29.652038354 +0000 UTC m=+0.289212488 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:50:29 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:50:38 localhost podman[97024]: 2025-12-02 08:50:38.465647711 +0000 UTC m=+0.097801628 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=nova_compute) Dec 2 03:50:38 localhost podman[97031]: 2025-12-02 08:50:38.510178666 +0000 UTC m=+0.136831835 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z) Dec 2 03:50:38 localhost podman[97024]: 2025-12-02 08:50:38.521310575 +0000 UTC m=+0.153464492 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 2 03:50:38 localhost systemd[1]: tmp-crun.3QYjyQ.mount: Deactivated successfully. 
Dec 2 03:50:38 localhost podman[97031]: 2025-12-02 08:50:38.56504352 +0000 UTC m=+0.191696719 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, 
distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Dec 2 03:50:38 localhost podman[97022]: 2025-12-02 08:50:38.567284171 +0000 UTC m=+0.204778401 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z) Dec 2 03:50:38 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:50:38 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:50:38 localhost podman[97022]: 2025-12-02 08:50:38.648158003 +0000 UTC m=+0.285652303 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=logrotate_crond, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 
cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-cron-container, release=1761123044) Dec 2 03:50:38 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:50:38 localhost podman[97023]: 2025-12-02 08:50:38.661769099 +0000 UTC m=+0.297565544 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
build-date=2025-11-19T00:36:58Z, vcs-type=git, container_name=nova_migration_target, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 2 03:50:38 localhost podman[97025]: 2025-12-02 08:50:38.716072207 +0000 UTC m=+0.343193349 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:50:38 localhost podman[97025]: 2025-12-02 08:50:38.745010264 +0000 UTC m=+0.372131436 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team) Dec 2 03:50:38 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:50:39 localhost podman[97023]: 2025-12-02 08:50:39.029539897 +0000 UTC m=+0.665336362 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:50:39 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:50:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 03:50:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:50:41 localhost systemd[1]: tmp-crun.gU6oNv.mount: Deactivated successfully. Dec 2 03:50:41 localhost podman[97143]: 2025-12-02 08:50:41.447602953 +0000 UTC m=+0.090501611 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, container_name=ovn_controller, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:50:41 localhost podman[97143]: 2025-12-02 08:50:41.485677856 +0000 UTC m=+0.128576474 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ovn_controller) Dec 2 03:50:41 localhost podman[97142]: 2025-12-02 08:50:41.484759871 +0000 UTC m=+0.129544520 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4) Dec 2 03:50:41 localhost podman[97143]: unhealthy Dec 2 03:50:41 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:50:41 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 
'exit-code'. Dec 2 03:50:41 localhost podman[97142]: 2025-12-02 08:50:41.571346047 +0000 UTC m=+0.216130716 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, 
io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12) Dec 2 03:50:41 localhost podman[97142]: unhealthy Dec 2 03:50:41 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:50:41 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:50:43 localhost systemd[1]: tmp-crun.qF8tpw.mount: Deactivated successfully. 
Dec 2 03:50:43 localhost podman[97282]: 2025-12-02 08:50:43.705139619 +0000 UTC m=+0.104926250 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, RELEASE=main, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4) Dec 2 03:50:43 localhost podman[97282]: 2025-12-02 08:50:43.835018017 +0000 UTC m=+0.234804688 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1763362218, RELEASE=main, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-type=git, name=rhceph) Dec 2 03:50:46 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:50:46 localhost recover_tripleo_nova_virtqemud[97425]: 62312 Dec 2 03:50:46 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:50:46 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:50:50 localhost systemd[1]: tmp-crun.lJE2qB.mount: Deactivated successfully. 
Dec 2 03:50:50 localhost podman[97427]: 2025-12-02 08:50:50.459098865 +0000 UTC m=+0.098230400 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step3, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://www.redhat.com) Dec 2 03:50:50 localhost podman[97427]: 2025-12-02 08:50:50.469956596 +0000 UTC m=+0.109088101 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:50:50 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:50:50 localhost podman[97426]: 2025-12-02 08:50:50.554522697 +0000 UTC m=+0.194087593 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, container_name=collectd, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd) Dec 2 03:50:50 localhost podman[97426]: 2025-12-02 08:50:50.568598785 +0000 UTC m=+0.208163661 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc.) Dec 2 03:50:50 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:51:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:51:00 localhost podman[97464]: 2025-12-02 08:51:00.461272784 +0000 UTC m=+0.100128591 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, tcib_managed=true, config_id=tripleo_step1) Dec 2 03:51:00 localhost podman[97464]: 2025-12-02 08:51:00.660121505 +0000 UTC m=+0.298977332 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:51:00 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:51:09 localhost podman[97494]: 2025-12-02 08:51:09.451187817 +0000 UTC m=+0.082953728 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:51:09 localhost podman[97496]: 2025-12-02 08:51:09.499832244 +0000 UTC m=+0.127376573 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Dec 2 03:51:09 localhost systemd[1]: tmp-crun.jRrpi3.mount: Deactivated successfully. 
Dec 2 03:51:09 localhost podman[97496]: 2025-12-02 08:51:09.554722798 +0000 UTC m=+0.182267137 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 
17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Dec 2 03:51:09 localhost podman[97495]: 2025-12-02 08:51:09.554119282 +0000 UTC m=+0.182835232 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:51:09 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:51:09 localhost podman[97502]: 2025-12-02 08:51:09.612015517 +0000 UTC m=+0.234518490 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 03:51:09 localhost podman[97495]: 2025-12-02 08:51:09.641101308 +0000 UTC m=+0.269817288 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:51:09 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:51:09 localhost podman[97493]: 2025-12-02 08:51:09.653634875 +0000 UTC m=+0.289457306 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 2 03:51:09 localhost podman[97502]: 2025-12-02 08:51:09.665044412 +0000 UTC m=+0.287547395 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z) Dec 2 03:51:09 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:51:09 localhost podman[97493]: 2025-12-02 08:51:09.715431475 +0000 UTC m=+0.351253906 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 
17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, architecture=x86_64) Dec 2 03:51:09 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:51:09 localhost podman[97494]: 2025-12-02 08:51:09.805083963 +0000 UTC m=+0.436849904 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible) Dec 2 03:51:09 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:51:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:51:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:51:12 localhost podman[97609]: 2025-12-02 08:51:12.446899201 +0000 UTC m=+0.085993471 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, build-date=2025-11-19T00:14:25Z, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 2 03:51:12 localhost systemd[1]: tmp-crun.evKj3I.mount: Deactivated successfully. 
Dec 2 03:51:12 localhost podman[97610]: 2025-12-02 08:51:12.493631345 +0000 UTC m=+0.130637689 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 
ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:51:12 localhost podman[97609]: 2025-12-02 08:51:12.514552728 +0000 UTC m=+0.153646998 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:51:12 localhost podman[97609]: unhealthy Dec 2 03:51:12 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:51:12 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 03:51:12 localhost podman[97610]: 2025-12-02 08:51:12.538005967 +0000 UTC m=+0.175012341 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, maintainer=OpenStack TripleO Team) Dec 2 03:51:12 localhost podman[97610]: unhealthy Dec 2 03:51:12 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:51:12 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:51:21 localhost systemd[1]: tmp-crun.QmyPiz.mount: Deactivated successfully. Dec 2 03:51:21 localhost podman[97650]: 2025-12-02 08:51:21.445081114 +0000 UTC m=+0.084493820 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, 
managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 03:51:21 localhost podman[97650]: 2025-12-02 08:51:21.48891092 +0000 UTC m=+0.128323606 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3) Dec 2 03:51:21 localhost systemd[1]: tmp-crun.vDrH7u.mount: Deactivated successfully. Dec 2 03:51:21 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:51:21 localhost podman[97649]: 2025-12-02 08:51:21.495455357 +0000 UTC m=+0.135947463 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, container_name=collectd, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true) Dec 2 03:51:21 localhost podman[97649]: 2025-12-02 08:51:21.576500194 +0000 UTC m=+0.216992330 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, config_id=tripleo_step3, architecture=x86_64, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:51:21 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:51:31 localhost systemd[1]: tmp-crun.QnmAJd.mount: Deactivated successfully. Dec 2 03:51:31 localhost podman[97689]: 2025-12-02 08:51:31.443440634 +0000 UTC m=+0.088944821 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, tcib_managed=true, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:51:31 localhost podman[97689]: 2025-12-02 08:51:31.639087929 +0000 UTC m=+0.284592166 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:51:31 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. 
Dec 2 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:51:40 localhost systemd[1]: tmp-crun.V3MtIl.mount: Deactivated successfully. Dec 2 03:51:40 localhost systemd[1]: tmp-crun.gqWpOW.mount: Deactivated successfully. Dec 2 03:51:40 localhost podman[97725]: 2025-12-02 08:51:40.485878244 +0000 UTC m=+0.109746530 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:51:40 localhost podman[97718]: 2025-12-02 08:51:40.496625552 +0000 UTC m=+0.130708382 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible) Dec 2 
03:51:40 localhost podman[97720]: 2025-12-02 08:51:40.445993382 +0000 UTC m=+0.077579024 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true) Dec 2 03:51:40 localhost podman[97725]: 2025-12-02 08:51:40.536821742 +0000 UTC m=+0.160690038 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 03:51:40 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:51:40 localhost podman[97717]: 2025-12-02 08:51:40.5520553 +0000 UTC m=+0.189712795 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com) Dec 2 03:51:40 localhost podman[97719]: 2025-12-02 08:51:40.60974388 +0000 UTC m=+0.240103820 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step5, io.buildah.version=1.41.4, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:51:40 localhost podman[97720]: 2025-12-02 08:51:40.62763138 +0000 UTC m=+0.259216992 container exec_died 
4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com) Dec 2 03:51:40 localhost podman[97717]: 2025-12-02 08:51:40.63468952 +0000 UTC m=+0.272347035 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, name=rhosp17/openstack-cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
version=17.1.12, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:51:40 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:51:40 localhost podman[97719]: 2025-12-02 08:51:40.63951546 +0000 UTC m=+0.269875400 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:51:40 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:51:40 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:51:40 localhost podman[97718]: 2025-12-02 08:51:40.841445304 +0000 UTC m=+0.475528154 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, release=1761123044, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:51:40 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:51:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:51:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:51:43 localhost podman[97839]: 2025-12-02 08:51:43.44377124 +0000 UTC m=+0.083278539 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:51:43 localhost podman[97839]: 2025-12-02 08:51:43.487079902 +0000 UTC m=+0.126587191 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:14:25Z, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:51:43 localhost systemd[1]: tmp-crun.AQ10dS.mount: Deactivated successfully. Dec 2 03:51:43 localhost podman[97839]: unhealthy Dec 2 03:51:43 localhost podman[97840]: 2025-12-02 08:51:43.497545214 +0000 UTC m=+0.130907838 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com) Dec 2 03:51:43 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:51:43 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:51:43 localhost podman[97840]: 2025-12-02 08:51:43.542272295 +0000 UTC m=+0.175634909 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., 
distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Dec 2 03:51:43 localhost podman[97840]: unhealthy Dec 2 03:51:43 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:51:43 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:51:52 localhost podman[97957]: 2025-12-02 08:51:52.481538803 +0000 UTC m=+0.118304869 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:51:52 localhost podman[97957]: 2025-12-02 08:51:52.498075798 +0000 UTC m=+0.134841864 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-collectd, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:51:52 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:51:52 localhost podman[97958]: 2025-12-02 08:51:52.546824757 +0000 UTC m=+0.183522460 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Dec 2 03:51:52 localhost podman[97958]: 2025-12-02 08:51:52.563104864 +0000 UTC m=+0.199802567 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
container_name=iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 03:51:52 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:52:02 localhost systemd[1]: tmp-crun.01JRCM.mount: Deactivated successfully. 
Dec 2 03:52:02 localhost podman[97996]: 2025-12-02 08:52:02.46683891 +0000 UTC m=+0.099697858 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:52:02 localhost podman[97996]: 2025-12-02 08:52:02.700822205 +0000 UTC m=+0.333681113 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.expose-services=) Dec 2 03:52:02 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:52:11 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:52:11 localhost recover_tripleo_nova_virtqemud[98056]: 62312 Dec 2 03:52:11 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:52:11 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:52:11 localhost systemd[1]: tmp-crun.z6j0V7.mount: Deactivated successfully. Dec 2 03:52:11 localhost podman[98026]: 2025-12-02 08:52:11.45371258 +0000 UTC m=+0.085740853 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:52:11 localhost systemd[1]: tmp-crun.7LYMog.mount: Deactivated successfully. 
Dec 2 03:52:11 localhost podman[98024]: 2025-12-02 08:52:11.469549076 +0000 UTC m=+0.105855874 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, container_name=logrotate_crond, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:52:11 localhost podman[98026]: 2025-12-02 08:52:11.481014684 +0000 UTC m=+0.113042987 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:52:11 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:52:11 localhost podman[98024]: 2025-12-02 08:52:11.506162389 +0000 UTC m=+0.142469237 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:52:11 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:52:11 localhost podman[98030]: 2025-12-02 08:52:11.518251924 +0000 UTC m=+0.144716937 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4) Dec 2 03:52:11 localhost podman[98038]: 2025-12-02 08:52:11.570751194 +0000 UTC m=+0.193985521 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, 
distribution-scope=public) Dec 2 03:52:11 localhost podman[98030]: 2025-12-02 08:52:11.57691427 +0000 UTC m=+0.203379293 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute) Dec 2 03:52:11 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:52:11 localhost podman[98025]: 2025-12-02 08:52:11.623267925 +0000 UTC m=+0.257477557 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, distribution-scope=public, url=https://www.redhat.com, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4) Dec 2 03:52:11 localhost podman[98038]: 2025-12-02 08:52:11.627973721 +0000 UTC m=+0.251208018 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 2 03:52:11 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:52:12 localhost podman[98025]: 2025-12-02 08:52:12.036327269 +0000 UTC m=+0.670536921 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z) Dec 2 03:52:12 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:52:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:52:14 localhost systemd[1]: tmp-crun.sT02EG.mount: Deactivated successfully. 
Dec 2 03:52:14 localhost podman[98143]: 2025-12-02 08:52:14.470992771 +0000 UTC m=+0.109038979 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 2 03:52:14 localhost systemd[1]: tmp-crun.cEBywn.mount: Deactivated successfully. 
Dec 2 03:52:14 localhost podman[98144]: 2025-12-02 08:52:14.520746858 +0000 UTC m=+0.156772982 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, batch=17.1_20251118.1) Dec 2 03:52:14 localhost podman[98143]: 2025-12-02 08:52:14.537350944 +0000 UTC m=+0.175397162 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team) Dec 2 03:52:14 localhost podman[98143]: unhealthy Dec 2 03:52:14 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:52:14 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 03:52:14 localhost podman[98144]: 2025-12-02 08:52:14.565183401 +0000 UTC m=+0.201209526 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-type=git, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:52:14 localhost podman[98144]: unhealthy Dec 2 03:52:14 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:52:14 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:52:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:52:23 localhost podman[98183]: 2025-12-02 08:52:23.440576245 +0000 UTC m=+0.080722369 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:52:23 localhost podman[98183]: 2025-12-02 08:52:23.448364285 +0000 UTC m=+0.088510469 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 2 03:52:23 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:52:23 localhost podman[98184]: 2025-12-02 08:52:23.492338946 +0000 UTC m=+0.129237653 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:52:23 localhost podman[98184]: 2025-12-02 08:52:23.502792746 +0000 UTC m=+0.139691393 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:52:23 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated 
successfully. Dec 2 03:52:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:52:33 localhost podman[98222]: 2025-12-02 08:52:33.454439438 +0000 UTC m=+0.094274073 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 03:52:33 localhost podman[98222]: 2025-12-02 08:52:33.643043204 +0000 UTC m=+0.282877849 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 2 03:52:33 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:52:42 localhost podman[98253]: 2025-12-02 08:52:42.469323669 +0000 UTC m=+0.095093725 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:52:42 localhost systemd[1]: tmp-crun.AIX1Yo.mount: Deactivated successfully. 
Dec 2 03:52:42 localhost podman[98263]: 2025-12-02 08:52:42.533234915 +0000 UTC m=+0.152066435 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:52:42 localhost podman[98257]: 2025-12-02 08:52:42.57177031 +0000 UTC m=+0.194387971 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true) Dec 2 03:52:42 localhost podman[98253]: 2025-12-02 08:52:42.60375455 +0000 UTC m=+0.229524626 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., 
release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:52:42 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:52:42 localhost podman[98252]: 2025-12-02 08:52:42.622788781 +0000 UTC m=+0.252075102 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, 
maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:52:42 localhost podman[98251]: 2025-12-02 08:52:42.674440728 +0000 UTC m=+0.306776471 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, container_name=logrotate_crond, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T22:49:32Z) 
Dec 2 03:52:42 localhost podman[98263]: 2025-12-02 08:52:42.691732263 +0000 UTC m=+0.310563813 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:52:42 localhost podman[98257]: 2025-12-02 08:52:42.701771633 +0000 UTC m=+0.324389204 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64) Dec 2 03:52:42 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:52:42 localhost podman[98251]: 2025-12-02 08:52:42.70986075 +0000 UTC m=+0.342196463 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:52:42 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:52:42 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:52:42 localhost podman[98252]: 2025-12-02 08:52:42.991812052 +0000 UTC m=+0.621098353 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, architecture=x86_64, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20251118.1) Dec 2 03:52:43 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:52:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:52:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:52:45 localhost podman[98371]: 2025-12-02 08:52:45.446923244 +0000 UTC m=+0.080954426 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:52:45 localhost systemd[1]: tmp-crun.1UDsWa.mount: Deactivated successfully. Dec 2 03:52:45 localhost podman[98371]: 2025-12-02 08:52:45.461840794 +0000 UTC m=+0.095871986 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1761123044, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:52:45 localhost podman[98371]: unhealthy Dec 2 03:52:45 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:52:45 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:52:45 localhost systemd[1]: tmp-crun.3gsea6.mount: Deactivated successfully. Dec 2 03:52:45 localhost podman[98370]: 2025-12-02 08:52:45.548730758 +0000 UTC m=+0.186801179 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, version=17.1.12, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 03:52:45 localhost podman[98370]: 
2025-12-02 08:52:45.591080515 +0000 UTC m=+0.229150896 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, build-date=2025-11-19T00:14:25Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:52:45 localhost podman[98370]: unhealthy Dec 2 03:52:45 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:52:45 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:52:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:52:54 localhost podman[98489]: 2025-12-02 08:52:54.442270739 +0000 UTC m=+0.081767178 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044) Dec 2 03:52:54 localhost podman[98489]: 2025-12-02 08:52:54.453943233 +0000 UTC m=+0.093439672 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, build-date=2025-11-18T23:44:13Z) Dec 2 03:52:54 localhost podman[98488]: 2025-12-02 08:52:54.489431655 +0000 UTC m=+0.128391999 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, 
container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-collectd, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) 
Dec 2 03:52:54 localhost podman[98488]: 2025-12-02 08:52:54.500289977 +0000 UTC m=+0.139250311 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_id=tripleo_step3, url=https://www.redhat.com) Dec 2 03:52:54 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:52:54 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:53:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:53:04 localhost podman[98528]: 2025-12-02 08:53:04.435978262 +0000 UTC m=+0.080260007 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64) Dec 2 03:53:04 localhost podman[98528]: 2025-12-02 08:53:04.610212491 +0000 UTC m=+0.254494156 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:53:04 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:53:13 localhost podman[98556]: 2025-12-02 08:53:13.45546477 +0000 UTC m=+0.095717222 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, release=1761123044, container_name=logrotate_crond, managed_by=tripleo_ansible, tcib_managed=true, 
io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:53:13 localhost systemd[1]: tmp-crun.7fdXtN.mount: Deactivated successfully. Dec 2 03:53:13 localhost podman[98558]: 2025-12-02 08:53:13.51914586 +0000 UTC m=+0.153385850 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 2 03:53:13 localhost podman[98558]: 2025-12-02 08:53:13.548063897 +0000 UTC m=+0.182303927 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step5, container_name=nova_compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64) Dec 2 03:53:13 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:53:13 localhost podman[98559]: 2025-12-02 08:53:13.569848473 +0000 UTC m=+0.199970103 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, vcs-type=git) Dec 2 03:53:13 localhost podman[98559]: 2025-12-02 08:53:13.62229213 +0000 UTC m=+0.252413770 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:11:48Z, architecture=x86_64, distribution-scope=public, release=1761123044) Dec 2 03:53:13 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:53:13 localhost podman[98557]: 2025-12-02 08:53:13.623668958 +0000 UTC m=+0.260971600 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, vcs-type=git) Dec 2 03:53:13 localhost podman[98556]: 2025-12-02 08:53:13.696205986 +0000 UTC m=+0.336458408 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=logrotate_crond, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:53:13 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:53:13 localhost podman[98569]: 2025-12-02 08:53:13.676356923 +0000 UTC m=+0.300501023 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64) Dec 2 03:53:13 localhost podman[98569]: 2025-12-02 08:53:13.757492171 +0000 UTC m=+0.381636301 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:53:13 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:53:13 localhost podman[98557]: 2025-12-02 08:53:13.975058705 +0000 UTC m=+0.612361387 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, tcib_managed=true, release=1761123044, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=) Dec 2 03:53:13 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:53:14 localhost systemd[1]: tmp-crun.5fV9ZS.mount: Deactivated successfully. Dec 2 03:53:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:53:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:53:16 localhost systemd[1]: tmp-crun.PVWsrU.mount: Deactivated successfully. 
Dec 2 03:53:16 localhost podman[98678]: 2025-12-02 08:53:16.451789559 +0000 UTC m=+0.093956734 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, url=https://www.redhat.com) Dec 2 03:53:16 localhost podman[98678]: 2025-12-02 08:53:16.491123885 +0000 UTC m=+0.133291040 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 2 03:53:16 localhost podman[98678]: unhealthy Dec 2 03:53:16 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:53:16 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:53:16 localhost podman[98677]: 2025-12-02 08:53:16.539085153 +0000 UTC m=+0.183423197 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Dec 2 03:53:16 localhost podman[98677]: 2025-12-02 08:53:16.560087708 +0000 UTC m=+0.204425832 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, 
maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, vcs-type=git) Dec 2 03:53:16 localhost podman[98677]: unhealthy Dec 2 03:53:16 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:53:16 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:53:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:53:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:53:25 localhost systemd[1]: tmp-crun.WZMH7P.mount: Deactivated successfully. 
Dec 2 03:53:25 localhost podman[98717]: 2025-12-02 08:53:25.480718548 +0000 UTC m=+0.120170379 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, 
managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044) Dec 2 03:53:25 localhost systemd[1]: tmp-crun.4y1e3G.mount: Deactivated successfully. Dec 2 03:53:25 localhost podman[98718]: 2025-12-02 08:53:25.503691945 +0000 UTC m=+0.141027519 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 03:53:25 localhost podman[98717]: 2025-12-02 08:53:25.525350757 +0000 UTC m=+0.164802638 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack 
Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:53:25 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:53:25 localhost podman[98718]: 2025-12-02 08:53:25.539276121 +0000 UTC m=+0.176611695 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:53:25 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:53:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:53:35 localhost podman[98757]: 2025-12-02 08:53:35.430793873 +0000 UTC m=+0.074124470 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:53:35 localhost podman[98757]: 2025-12-02 08:53:35.618795934 +0000 UTC m=+0.262126511 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-type=git, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1) Dec 2 03:53:35 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:53:44 localhost systemd[1]: tmp-crun.OtnRrG.mount: Deactivated successfully. Dec 2 03:53:44 localhost podman[98803]: 2025-12-02 08:53:44.489487934 +0000 UTC m=+0.111598882 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ceilometer_agent_ipmi, vcs-type=git, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:53:44 localhost podman[98788]: 2025-12-02 08:53:44.457851619 +0000 UTC m=+0.096086888 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.buildah.version=1.41.4, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 2 03:53:44 localhost podman[98790]: 2025-12-02 08:53:44.521691054 +0000 UTC m=+0.154399655 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:53:44 localhost podman[98789]: 2025-12-02 08:53:44.563502171 +0000 UTC m=+0.195252936 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, architecture=x86_64, tcib_managed=true) Dec 2 03:53:44 localhost podman[98790]: 2025-12-02 08:53:44.573037036 +0000 UTC m=+0.205745627 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:53:44 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:53:44 localhost podman[98788]: 2025-12-02 08:53:44.595600209 +0000 UTC m=+0.233835488 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 
17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=) Dec 2 03:53:44 localhost podman[98803]: 2025-12-02 08:53:44.595836295 +0000 UTC m=+0.217947273 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, 
config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Dec 2 03:53:44 localhost systemd[1]: 
0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:53:44 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:53:44 localhost podman[98796]: 2025-12-02 08:53:44.724323037 +0000 UTC m=+0.354750737 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:53:44 localhost podman[98796]: 2025-12-02 08:53:44.747837935 +0000 UTC m=+0.378265645 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:11:48Z, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 03:53:44 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:53:44 localhost podman[98789]: 2025-12-02 08:53:44.944070486 +0000 UTC m=+0.575821291 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:53:44 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:53:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:53:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:53:47 localhost systemd[1]: tmp-crun.j0tVgQ.mount: Deactivated successfully. 
Dec 2 03:53:47 localhost podman[98905]: 2025-12-02 08:53:47.456507254 +0000 UTC m=+0.093232591 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com) Dec 2 03:53:47 localhost podman[98905]: 2025-12-02 08:53:47.505181334 +0000 UTC m=+0.141906691 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.buildah.version=1.41.4) Dec 2 03:53:47 localhost podman[98905]: unhealthy Dec 2 03:53:47 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:53:47 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:53:47 localhost podman[98906]: 2025-12-02 08:53:47.505904344 +0000 UTC m=+0.139659211 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 2 03:53:47 localhost podman[98906]: 2025-12-02 08:53:47.586101456 +0000 UTC m=+0.219856333 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Dec 2 03:53:47 localhost podman[98906]: unhealthy Dec 2 03:53:47 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:53:47 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:53:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:53:56 localhost systemd[1]: tmp-crun.pOgWau.mount: Deactivated successfully. 
Dec 2 03:53:56 localhost podman[99021]: 2025-12-02 08:53:56.456892678 +0000 UTC m=+0.098555373 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, version=17.1.12) Dec 2 03:53:56 localhost systemd[1]: tmp-crun.pGHy9o.mount: Deactivated successfully. Dec 2 03:53:56 localhost podman[99022]: 2025-12-02 08:53:56.503314198 +0000 UTC m=+0.142992521 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Dec 2 03:53:56 localhost podman[99022]: 2025-12-02 08:53:56.511407944 +0000 UTC m=+0.151086297 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, 
io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 03:53:56 localhost podman[99021]: 2025-12-02 08:53:56.523089836 +0000 UTC m=+0.164752461 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64) Dec 2 03:53:56 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:53:56 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:54:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:54:06 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:54:06 localhost recover_tripleo_nova_virtqemud[99062]: 62312 Dec 2 03:54:06 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:54:06 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 2 03:54:06 localhost podman[99060]: 2025-12-02 08:54:06.434847363 +0000 UTC m=+0.068783688 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 2 03:54:06 localhost podman[99060]: 2025-12-02 08:54:06.611986975 +0000 UTC m=+0.245923310 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, release=1761123044, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Dec 2 03:54:06 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:54:15 localhost podman[99091]: 2025-12-02 08:54:15.470972393 +0000 UTC m=+0.102735296 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=logrotate_crond, name=rhosp17/openstack-cron, distribution-scope=public, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:54:15 localhost podman[99091]: 2025-12-02 08:54:15.507048576 +0000 UTC m=+0.138811429 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:54:15 localhost systemd[1]: tmp-crun.Z7q7DU.mount: Deactivated successfully. Dec 2 03:54:15 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:54:15 localhost podman[99093]: 2025-12-02 08:54:15.533125882 +0000 UTC m=+0.155213386 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, distribution-scope=public, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:54:15 localhost podman[99092]: 2025-12-02 08:54:15.574783805 +0000 UTC m=+0.203282360 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vendor=Red Hat, Inc.) 
Dec 2 03:54:15 localhost podman[99097]: 2025-12-02 08:54:15.634687356 +0000 UTC m=+0.254783736 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 2 03:54:15 localhost podman[99105]: 2025-12-02 08:54:15.676807141 +0000 UTC m=+0.292855394 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, release=1761123044, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, 
vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Dec 2 03:54:15 localhost podman[99093]: 2025-12-02 08:54:15.694480443 +0000 UTC m=+0.316567917 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Dec 2 03:54:15 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:54:15 localhost podman[99097]: 2025-12-02 08:54:15.715002721 +0000 UTC m=+0.335099141 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:54:15 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:54:15 localhost podman[99105]: 2025-12-02 08:54:15.737203354 +0000 UTC m=+0.353251607 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:54:15 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:54:15 localhost podman[99092]: 2025-12-02 08:54:15.942046695 +0000 UTC m=+0.570545310 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_id=tripleo_step4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:54:15 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:54:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:54:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:54:18 localhost podman[99206]: 2025-12-02 08:54:18.429734102 +0000 UTC m=+0.073379591 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=) Dec 2 03:54:18 localhost systemd[1]: tmp-crun.eSfgLc.mount: Deactivated successfully. Dec 2 03:54:18 localhost podman[99206]: 2025-12-02 08:54:18.439558795 +0000 UTC m=+0.083204274 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
distribution-scope=public, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible) Dec 2 03:54:18 localhost podman[99206]: unhealthy Dec 2 03:54:18 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:54:18 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:54:18 localhost podman[99205]: 2025-12-02 08:54:18.484388512 +0000 UTC m=+0.127601640 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, release=1761123044, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12) Dec 2 03:54:18 localhost podman[99205]: 2025-12-02 08:54:18.498183221 
+0000 UTC m=+0.141396309 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:54:18 localhost podman[99205]: unhealthy Dec 2 03:54:18 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:54:18 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:54:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:54:27 localhost systemd[1]: tmp-crun.Mzkq8r.mount: Deactivated successfully. 
Dec 2 03:54:27 localhost podman[99246]: 2025-12-02 08:54:27.446635867 +0000 UTC m=+0.091900776 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:54:27 localhost podman[99247]: 2025-12-02 08:54:27.484711794 +0000 UTC m=+0.126724977 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-type=git) Dec 2 03:54:27 localhost podman[99247]: 2025-12-02 08:54:27.491797043 +0000 UTC m=+0.133810256 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) 
Dec 2 03:54:27 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:54:27 localhost podman[99246]: 2025-12-02 08:54:27.50929521 +0000 UTC m=+0.154560119 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, container_name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 2 03:54:27 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:54:28 localhost systemd[1]: tmp-crun.UaZNKW.mount: Deactivated successfully. Dec 2 03:54:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:54:37 localhost systemd[1]: tmp-crun.fk8HHB.mount: Deactivated successfully. 
Dec 2 03:54:37 localhost podman[99284]: 2025-12-02 08:54:37.438126334 +0000 UTC m=+0.082782783 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=metrics_qdr) Dec 2 03:54:37 localhost podman[99284]: 2025-12-02 08:54:37.652086979 +0000 UTC m=+0.296743478 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 03:54:37 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:54:46 localhost systemd[1]: tmp-crun.faWkSd.mount: Deactivated successfully. Dec 2 03:54:46 localhost podman[99315]: 2025-12-02 08:54:46.467597754 +0000 UTC m=+0.100361431 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:54:46 localhost podman[99315]: 2025-12-02 08:54:46.548541567 +0000 UTC m=+0.181305244 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step5, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 03:54:46 localhost podman[99314]: 2025-12-02 08:54:46.558681717 +0000 UTC m=+0.194485736 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:54:46 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:54:46 localhost podman[99313]: 2025-12-02 08:54:46.608119018 +0000 UTC m=+0.247366248 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, version=17.1.12, container_name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, 
url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:54:46 localhost podman[99313]: 2025-12-02 08:54:46.621015242 +0000 UTC m=+0.260262442 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 2 03:54:46 localhost podman[99319]: 2025-12-02 08:54:46.662826019 +0000 UTC m=+0.292196946 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 2 03:54:46 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:54:46 localhost podman[99319]: 2025-12-02 08:54:46.692099871 +0000 UTC m=+0.321470798 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, vcs-type=git, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:54:46 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:54:46 localhost podman[99322]: 2025-12-02 08:54:46.531103571 +0000 UTC m=+0.156178333 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:54:46 localhost podman[99322]: 2025-12-02 08:54:46.762807779 +0000 UTC m=+0.387882581 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:54:46 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:54:46 localhost podman[99314]: 2025-12-02 08:54:46.948317704 +0000 UTC m=+0.584121693 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
version=17.1.12, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1) Dec 2 03:54:46 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:54:47 localhost systemd[1]: tmp-crun.9vDpsa.mount: Deactivated successfully. Dec 2 03:54:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:54:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:54:49 localhost podman[99431]: 2025-12-02 08:54:49.443079721 +0000 UTC m=+0.084326414 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:54:49 localhost podman[99431]: 2025-12-02 08:54:49.487066046 +0000 UTC m=+0.128312699 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.12, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 03:54:49 localhost podman[99431]: unhealthy Dec 2 03:54:49 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:54:49 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:54:49 localhost podman[99432]: 2025-12-02 08:54:49.502091367 +0000 UTC m=+0.139402035 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:34:05Z, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Dec 2 03:54:49 localhost podman[99432]: 2025-12-02 08:54:49.520221382 +0000 UTC m=+0.157532020 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, tcib_managed=true) Dec 2 03:54:49 localhost podman[99432]: unhealthy Dec 2 03:54:49 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:54:49 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:54:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:54:58 localhost podman[99548]: 2025-12-02 08:54:58.470924108 +0000 UTC m=+0.095452621 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:54:58 localhost systemd[1]: tmp-crun.lsRpBX.mount: Deactivated successfully. Dec 2 03:54:58 localhost podman[99547]: 2025-12-02 08:54:58.526578714 +0000 UTC m=+0.150744527 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, container_name=collectd, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, version=17.1.12) Dec 2 03:54:58 localhost podman[99548]: 2025-12-02 08:54:58.534019882 +0000 UTC m=+0.158548415 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=) Dec 2 03:54:58 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:54:58 localhost podman[99547]: 2025-12-02 08:54:58.565037321 +0000 UTC m=+0.189203104 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 
'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 03:54:58 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:55:05 localhost systemd[1]: session-29.scope: Deactivated successfully. Dec 2 03:55:05 localhost systemd[1]: session-29.scope: Consumed 7min 13.621s CPU time. Dec 2 03:55:05 localhost systemd-logind[757]: Session 29 logged out. Waiting for processes to exit. Dec 2 03:55:05 localhost systemd-logind[757]: Removed session 29. Dec 2 03:55:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:55:08 localhost podman[99588]: 2025-12-02 08:55:08.43981056 +0000 UTC m=+0.083470371 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible) Dec 2 03:55:08 localhost podman[99588]: 2025-12-02 08:55:08.631790228 +0000 UTC m=+0.275450029 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true) Dec 2 03:55:08 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:55:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:55:15 localhost systemd[1]: Stopping User Manager for UID 1003... Dec 2 03:55:15 localhost systemd[35843]: Activating special unit Exit the Session... Dec 2 03:55:15 localhost systemd[35843]: Removed slice User Background Tasks Slice. Dec 2 03:55:15 localhost systemd[35843]: Stopped target Main User Target. Dec 2 03:55:15 localhost systemd[35843]: Stopped target Basic System. Dec 2 03:55:15 localhost systemd[35843]: Stopped target Paths. Dec 2 03:55:15 localhost systemd[35843]: Stopped target Sockets. 
Dec 2 03:55:15 localhost systemd[35843]: Stopped target Timers. Dec 2 03:55:15 localhost systemd[35843]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 2 03:55:15 localhost systemd[35843]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 03:55:15 localhost systemd[35843]: Closed D-Bus User Message Bus Socket. Dec 2 03:55:15 localhost systemd[35843]: Stopped Create User's Volatile Files and Directories. Dec 2 03:55:15 localhost systemd[35843]: Removed slice User Application Slice. Dec 2 03:55:15 localhost systemd[35843]: Reached target Shutdown. Dec 2 03:55:15 localhost systemd[35843]: Finished Exit the Session. Dec 2 03:55:15 localhost systemd[35843]: Reached target Exit the Session. Dec 2 03:55:15 localhost systemd[1]: user@1003.service: Deactivated successfully. Dec 2 03:55:15 localhost systemd[1]: Stopped User Manager for UID 1003. Dec 2 03:55:15 localhost systemd[1]: user@1003.service: Consumed 4.824s CPU time, read 0B from disk, written 7.0K to disk. Dec 2 03:55:15 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Dec 2 03:55:15 localhost recover_tripleo_nova_virtqemud[99618]: 62312 Dec 2 03:55:15 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Dec 2 03:55:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:55:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:55:15 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Dec 2 03:55:15 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Dec 2 03:55:15 localhost systemd[1]: Removed slice User Slice of UID 1003. Dec 2 03:55:15 localhost systemd[1]: user-1003.slice: Consumed 7min 18.466s CPU time. Dec 2 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. 
Dec 2 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:55:17 localhost podman[99635]: 2025-12-02 08:55:17.467246356 +0000 UTC m=+0.089355238 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible) Dec 2 03:55:17 localhost systemd[1]: tmp-crun.Sa8Wem.mount: Deactivated successfully. 
Dec 2 03:55:17 localhost podman[99622]: 2025-12-02 08:55:17.533864775 +0000 UTC m=+0.161526585 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, version=17.1.12) Dec 2 03:55:17 localhost podman[99627]: 2025-12-02 08:55:17.440598804 +0000 UTC m=+0.069549939 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, architecture=x86_64, 
name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
version=17.1.12) Dec 2 03:55:17 localhost podman[99635]: 2025-12-02 08:55:17.545000872 +0000 UTC m=+0.167109804 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com) Dec 2 03:55:17 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:55:17 localhost podman[99622]: 2025-12-02 08:55:17.592046019 +0000 UTC m=+0.219707829 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1761123044, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, 
vcs-type=git, managed_by=tripleo_ansible) Dec 2 03:55:17 localhost podman[99620]: 2025-12-02 08:55:17.553113019 +0000 UTC m=+0.188093535 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, 
release=1761123044, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 03:55:17 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:55:17 localhost podman[99621]: 2025-12-02 08:55:17.608817677 +0000 UTC m=+0.242349784 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible) Dec 2 03:55:17 localhost podman[99627]: 2025-12-02 08:55:17.62615269 +0000 UTC m=+0.255103845 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, 
name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:55:17 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:55:17 localhost podman[99620]: 2025-12-02 08:55:17.639890777 +0000 UTC m=+0.274871273 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:32Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:55:17 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:55:18 localhost podman[99621]: 2025-12-02 08:55:18.011452252 +0000 UTC m=+0.644984419 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute) Dec 2 03:55:18 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:55:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:55:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:55:20 localhost systemd[1]: tmp-crun.Hnqnmi.mount: Deactivated successfully. Dec 2 03:55:20 localhost podman[99739]: 2025-12-02 08:55:20.457724032 +0000 UTC m=+0.095645755 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 03:55:20 localhost podman[99739]: 2025-12-02 08:55:20.471070769 +0000 UTC m=+0.108992502 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com) Dec 2 03:55:20 localhost podman[99739]: unhealthy Dec 2 03:55:20 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:55:20 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:55:20 localhost podman[99740]: 2025-12-02 08:55:20.436699621 +0000 UTC m=+0.075788225 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4) Dec 2 03:55:20 localhost podman[99740]: 2025-12-02 08:55:20.520259413 +0000 UTC m=+0.159347997 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 2 03:55:20 localhost podman[99740]: unhealthy Dec 2 03:55:20 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:55:20 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:55:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:55:29 localhost podman[99778]: 2025-12-02 08:55:29.436925149 +0000 UTC m=+0.077643094 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible) Dec 2 03:55:29 localhost podman[99778]: 2025-12-02 08:55:29.449934507 +0000 UTC m=+0.090652492 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.12, 
summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:55:29 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:55:29 localhost podman[99779]: 2025-12-02 08:55:29.493738748 +0000 UTC m=+0.130002734 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, 
description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, container_name=iscsid, build-date=2025-11-18T23:44:13Z) Dec 2 03:55:29 localhost podman[99779]: 2025-12-02 08:55:29.506862367 +0000 UTC m=+0.143126273 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 03:55:29 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:55:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:55:39 localhost podman[99816]: 2025-12-02 08:55:39.420375472 +0000 UTC m=+0.066428895 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, version=17.1.12) Dec 2 03:55:39 localhost podman[99816]: 2025-12-02 08:55:39.64118043 +0000 UTC m=+0.287233803 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12) Dec 2 03:55:39 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:55:48 localhost podman[99851]: 2025-12-02 08:55:48.437421801 +0000 UTC m=+0.069744244 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true) Dec 2 03:55:48 localhost podman[99846]: 2025-12-02 08:55:48.456569483 +0000 UTC m=+0.091361712 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-type=git, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 2 03:55:48 localhost podman[99851]: 2025-12-02 08:55:48.492140173 +0000 UTC m=+0.124462676 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, 
release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git) Dec 2 03:55:48 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:55:48 localhost podman[99855]: 2025-12-02 08:55:48.513795091 +0000 UTC m=+0.140648388 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-type=git, release=1761123044, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 2 03:55:48 localhost podman[99845]: 2025-12-02 08:55:48.564460134 +0000 UTC m=+0.203802064 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO 
Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:55:48 localhost podman[99855]: 2025-12-02 08:55:48.569119229 +0000 UTC m=+0.195972596 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4) Dec 2 03:55:48 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:55:48 localhost podman[99847]: 2025-12-02 08:55:48.61557755 +0000 UTC m=+0.248366295 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044) Dec 2 03:55:48 localhost podman[99845]: 2025-12-02 08:55:48.645682494 +0000 UTC m=+0.285024504 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, url=https://www.redhat.com) Dec 2 03:55:48 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:55:48 localhost podman[99847]: 2025-12-02 08:55:48.665853533 +0000 UTC m=+0.298642238 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:55:48 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:55:48 localhost podman[99846]: 2025-12-02 08:55:48.844932115 +0000 UTC m=+0.479724284 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 2 03:55:48 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:55:51 localhost systemd[1]: tmp-crun.kL5d4g.mount: Deactivated successfully. 
Dec 2 03:55:51 localhost podman[99964]: 2025-12-02 08:55:51.449253168 +0000 UTC m=+0.086409949 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, release=1761123044, container_name=ovn_controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., 
config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible) Dec 2 03:55:51 localhost podman[99964]: 2025-12-02 08:55:51.462104912 +0000 UTC m=+0.099261723 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true) Dec 2 03:55:51 localhost podman[99964]: unhealthy Dec 2 03:55:51 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:55:51 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:55:51 localhost podman[99963]: 2025-12-02 08:55:51.551849489 +0000 UTC m=+0.189917544 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, vcs-type=git, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:55:51 localhost podman[99963]: 2025-12-02 08:55:51.591042115 +0000 UTC m=+0.229110130 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible) Dec 2 03:55:51 localhost podman[99963]: unhealthy Dec 2 03:55:51 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:55:51 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:56:00 localhost podman[100080]: 2025-12-02 08:56:00.446802894 +0000 UTC m=+0.079831464 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Dec 2 03:56:00 localhost podman[100080]: 2025-12-02 08:56:00.460967182 +0000 UTC m=+0.093995752 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044) Dec 2 03:56:00 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:56:00 localhost systemd[1]: tmp-crun.x5EoOI.mount: Deactivated successfully. 
Dec 2 03:56:00 localhost podman[100079]: 2025-12-02 08:56:00.558693443 +0000 UTC m=+0.193007727 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=collectd, io.openshift.expose-services=) Dec 2 03:56:00 localhost podman[100079]: 2025-12-02 08:56:00.574963188 +0000 UTC m=+0.209277472 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:56:00 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:56:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:56:10 localhost podman[100115]: 2025-12-02 08:56:10.440198261 +0000 UTC m=+0.083961694 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:56:10 localhost podman[100115]: 2025-12-02 08:56:10.664169913 +0000 UTC m=+0.307933416 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Dec 2 03:56:10 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:56:19 localhost podman[100164]: 2025-12-02 08:56:19.459196351 +0000 UTC m=+0.080290906 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:56:19 localhost systemd[1]: tmp-crun.grzU4W.mount: Deactivated successfully. Dec 2 03:56:19 localhost podman[100148]: 2025-12-02 08:56:19.506065843 +0000 UTC m=+0.137065672 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:36:58Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:56:19 localhost podman[100164]: 2025-12-02 08:56:19.512959057 +0000 UTC m=+0.134053612 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible) Dec 2 03:56:19 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:56:19 localhost podman[100148]: 2025-12-02 08:56:19.558955025 +0000 UTC m=+0.189954834 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, release=1761123044, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute) Dec 2 03:56:19 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:56:19 localhost podman[100147]: 2025-12-02 08:56:19.613080422 +0000 UTC m=+0.247978575 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true) Dec 2 03:56:19 localhost podman[100150]: 2025-12-02 08:56:19.562986714 +0000 UTC m=+0.189047811 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) 
Dec 2 03:56:19 localhost podman[100146]: 2025-12-02 08:56:19.68454998 +0000 UTC m=+0.319218777 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, name=rhosp17/openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Dec 2 03:56:19 localhost podman[100146]: 2025-12-02 08:56:19.694017104 +0000 UTC m=+0.328685851 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:56:19 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:56:19 localhost podman[100150]: 2025-12-02 08:56:19.749257938 +0000 UTC m=+0.375319035 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute) Dec 2 03:56:19 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:56:20 localhost podman[100147]: 2025-12-02 08:56:20.003755666 +0000 UTC m=+0.638653849 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z) Dec 2 03:56:20 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:56:22 localhost systemd[1]: tmp-crun.e6xRUQ.mount: Deactivated successfully. Dec 2 03:56:22 localhost podman[100265]: 2025-12-02 08:56:22.460489297 +0000 UTC m=+0.101295147 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, tcib_managed=true, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z) Dec 2 03:56:22 localhost podman[100265]: 2025-12-02 08:56:22.473086424 +0000 UTC m=+0.113892324 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4) Dec 2 03:56:22 localhost podman[100265]: unhealthy Dec 2 03:56:22 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:56:22 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:56:22 localhost podman[100266]: 2025-12-02 08:56:22.434311708 +0000 UTC m=+0.075731754 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller) Dec 2 03:56:22 localhost podman[100266]: 2025-12-02 08:56:22.520235772 +0000 UTC m=+0.161655738 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:56:22 localhost podman[100266]: unhealthy Dec 2 03:56:22 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:56:22 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:56:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:56:31 localhost systemd[1]: tmp-crun.Bsza2p.mount: Deactivated successfully. 
Dec 2 03:56:31 localhost podman[100301]: 2025-12-02 08:56:31.446069657 +0000 UTC m=+0.081376364 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:56:31 localhost podman[100301]: 2025-12-02 08:56:31.484061251 +0000 UTC m=+0.119367978 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Dec 2 03:56:31 localhost podman[100302]: 2025-12-02 08:56:31.495044705 +0000 UTC m=+0.126289684 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid) Dec 2 03:56:31 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:56:31 localhost podman[100302]: 2025-12-02 08:56:31.506956143 +0000 UTC m=+0.138201062 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Dec 2 03:56:31 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:56:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:56:41 localhost podman[100340]: 2025-12-02 08:56:41.457246409 +0000 UTC m=+0.096931759 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vcs-type=git) Dec 2 03:56:41 localhost podman[100340]: 2025-12-02 08:56:41.68715773 +0000 UTC m=+0.326843100 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, distribution-scope=public, config_id=tripleo_step1, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:56:41 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:56:46 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:56:46 localhost recover_tripleo_nova_virtqemud[100370]: 62312 Dec 2 03:56:46 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:56:46 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:56:50 localhost podman[100373]: 2025-12-02 08:56:50.438604444 +0000 UTC m=+0.074440579 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=nova_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:56:50 localhost systemd[1]: tmp-crun.po652o.mount: Deactivated successfully. 
Dec 2 03:56:50 localhost podman[100373]: 2025-12-02 08:56:50.501877584 +0000 UTC m=+0.137713659 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, config_id=tripleo_step5, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, architecture=x86_64) Dec 2 03:56:50 localhost podman[100371]: 2025-12-02 08:56:50.501284179 +0000 UTC m=+0.140980078 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com) Dec 2 03:56:50 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:56:50 localhost podman[100390]: 2025-12-02 08:56:50.556181034 +0000 UTC m=+0.181243151 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:56:50 localhost podman[100390]: 2025-12-02 08:56:50.615066087 +0000 UTC m=+0.240128194 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:56:50 localhost podman[100379]: 2025-12-02 08:56:50.613896926 +0000 UTC m=+0.243498055 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-compute-container) Dec 2 03:56:50 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:56:50 localhost podman[100372]: 2025-12-02 08:56:50.66532479 +0000 UTC m=+0.302329997 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:56:50 localhost podman[100371]: 2025-12-02 08:56:50.686819354 +0000 UTC m=+0.326515303 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:56:50 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:56:50 localhost podman[100379]: 2025-12-02 08:56:50.700315525 +0000 UTC m=+0.329916724 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Dec 2 03:56:50 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:56:51 localhost podman[100372]: 2025-12-02 08:56:51.074051137 +0000 UTC m=+0.711056284 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_migration_target, release=1761123044, maintainer=OpenStack TripleO Team) Dec 2 03:56:51 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:56:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:56:53 localhost systemd[1]: tmp-crun.IXkMds.mount: Deactivated successfully. Dec 2 03:56:53 localhost podman[100492]: 2025-12-02 08:56:53.435818661 +0000 UTC m=+0.079928526 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 2 03:56:53 localhost podman[100492]: 2025-12-02 08:56:53.45525511 +0000 UTC m=+0.099365005 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:56:53 localhost podman[100492]: unhealthy Dec 2 03:56:53 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:56:53 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:56:53 localhost systemd[1]: tmp-crun.wl4bjJ.mount: Deactivated successfully. Dec 2 03:56:53 localhost podman[100493]: 2025-12-02 08:56:53.551736657 +0000 UTC m=+0.188841545 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4) Dec 2 03:56:53 localhost podman[100493]: 2025-12-02 08:56:53.562711881 +0000 UTC m=+0.199816789 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-ovn-controller, version=17.1.12, managed_by=tripleo_ansible) Dec 2 03:56:53 localhost podman[100493]: unhealthy Dec 2 03:56:53 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:56:53 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:57:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:57:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:57:02 localhost systemd[1]: tmp-crun.rvnIvj.mount: Deactivated successfully. 
Dec 2 03:57:02 localhost podman[100610]: 2025-12-02 08:57:02.432004092 +0000 UTC m=+0.076777752 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Dec 2 03:57:02 localhost podman[100609]: 2025-12-02 08:57:02.445397049 +0000 UTC m=+0.089433490 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:57:02 localhost podman[100609]: 2025-12-02 08:57:02.458193521 +0000 UTC m=+0.102229942 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, 
release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:57:02 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:57:02 localhost podman[100610]: 2025-12-02 08:57:02.472925705 +0000 UTC m=+0.117699385 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, container_name=iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4) Dec 2 03:57:02 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 03:57:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:57:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 4776 writes, 21K keys, 4776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4776 writes, 569 syncs, 8.39 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:57:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:57:12 localhost podman[100644]: 2025-12-02 08:57:12.443666985 +0000 UTC m=+0.082554945 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4) Dec 2 03:57:12 localhost podman[100644]: 2025-12-02 08:57:12.647978512 +0000 UTC m=+0.286866402 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:57:12 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:57:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 03:57:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 5722 writes, 25K keys, 5722 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5722 writes, 780 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. 
Dec 2 03:57:21 localhost podman[100672]: 2025-12-02 08:57:21.446801333 +0000 UTC m=+0.085214347 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:57:21 localhost podman[100672]: 2025-12-02 08:57:21.455979539 +0000 UTC m=+0.094392573 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, architecture=x86_64, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true) Dec 2 03:57:21 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:57:21 localhost podman[100686]: 2025-12-02 08:57:21.494573349 +0000 UTC m=+0.120722305 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
version=17.1.12, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:57:21 localhost podman[100686]: 2025-12-02 08:57:21.520123671 +0000 UTC m=+0.146272667 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=) Dec 2 03:57:21 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:57:21 localhost podman[100673]: 2025-12-02 08:57:21.60502435 +0000 UTC m=+0.237935327 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4) Dec 2 03:57:21 localhost podman[100674]: 2025-12-02 08:57:21.65332338 +0000 UTC m=+0.284256723 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:57:21 localhost podman[100674]: 2025-12-02 
08:57:21.708920495 +0000 UTC m=+0.339853818 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 2 03:57:21 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:57:21 localhost podman[100680]: 2025-12-02 08:57:21.720795742 +0000 UTC m=+0.346654871 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true) Dec 2 03:57:21 localhost podman[100680]: 2025-12-02 08:57:21.742886892 +0000 UTC m=+0.368746041 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, release=1761123044, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, tcib_managed=true) Dec 2 03:57:21 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:57:21 localhost podman[100673]: 2025-12-02 08:57:21.950076296 +0000 UTC m=+0.582987303 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, architecture=x86_64) Dec 2 03:57:21 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:57:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:57:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:57:24 localhost podman[100793]: 2025-12-02 08:57:24.417303637 +0000 UTC m=+0.063410906 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:57:24 localhost podman[100794]: 2025-12-02 08:57:24.486136724 +0000 UTC m=+0.127543007 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, 
batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 2 03:57:24 localhost podman[100794]: 2025-12-02 08:57:24.501415953 +0000 UTC m=+0.142822236 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git) Dec 2 03:57:24 localhost podman[100794]: unhealthy Dec 2 03:57:24 localhost podman[100793]: 2025-12-02 08:57:24.510590758 +0000 UTC m=+0.156698027 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Dec 2 03:57:24 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:57:24 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:57:24 localhost podman[100793]: unhealthy Dec 2 03:57:24 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:57:24 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:57:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:57:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 03:57:33 localhost systemd[1]: tmp-crun.r607IL.mount: Deactivated successfully. 
Dec 2 03:57:33 localhost podman[100832]: 2025-12-02 08:57:33.440234462 +0000 UTC m=+0.080963503 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Dec 2 03:57:33 localhost podman[100832]: 2025-12-02 08:57:33.448990396 +0000 UTC m=+0.089719457 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, architecture=x86_64, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044) Dec 2 03:57:33 localhost podman[100833]: 2025-12-02 08:57:33.461333365 +0000 UTC m=+0.094465054 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, container_name=iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, 
com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 03:57:33 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:57:33 localhost podman[100833]: 2025-12-02 08:57:33.468851637 +0000 UTC m=+0.101983346 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, release=1761123044, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, container_name=iscsid, batch=17.1_20251118.1) Dec 2 03:57:33 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:57:34 localhost systemd[1]: tmp-crun.ahez0k.mount: Deactivated successfully. Dec 2 03:57:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:57:43 localhost systemd[1]: tmp-crun.I9ytkW.mount: Deactivated successfully. 
Dec 2 03:57:43 localhost podman[100869]: 2025-12-02 08:57:43.436770323 +0000 UTC m=+0.081688433 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, release=1761123044, batch=17.1_20251118.1) Dec 2 03:57:43 localhost podman[100869]: 2025-12-02 08:57:43.633758805 +0000 UTC m=+0.278676975 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, tcib_managed=true) Dec 2 03:57:43 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:57:52 localhost podman[100898]: 2025-12-02 08:57:52.45747888 +0000 UTC m=+0.093553239 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vcs-type=git, io.buildah.version=1.41.4, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public) Dec 2 03:57:52 localhost podman[100898]: 2025-12-02 08:57:52.469965783 +0000 UTC m=+0.106040152 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 2 03:57:52 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:57:52 localhost systemd[1]: tmp-crun.64npVu.mount: Deactivated successfully. 
Dec 2 03:57:52 localhost podman[100901]: 2025-12-02 08:57:52.551893732 +0000 UTC m=+0.178426747 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:57:52 localhost podman[100899]: 2025-12-02 08:57:52.573223412 +0000 UTC m=+0.204445312 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Dec 2 03:57:52 localhost podman[100901]: 2025-12-02 08:57:52.582019496 +0000 UTC m=+0.208552481 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4) Dec 2 03:57:52 localhost podman[100907]: 2025-12-02 08:57:52.616251421 +0000 UTC m=+0.239352544 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 03:57:52 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:57:52 localhost podman[100907]: 2025-12-02 08:57:52.650136516 +0000 UTC m=+0.273237699 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:57:52 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:57:52 localhost podman[100900]: 2025-12-02 08:57:52.721865722 +0000 UTC m=+0.351563361 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Dec 2 03:57:52 localhost podman[100900]: 2025-12-02 08:57:52.776154602 +0000 UTC m=+0.405852251 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, managed_by=tripleo_ansible) Dec 2 03:57:52 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:57:52 localhost podman[100899]: 2025-12-02 08:57:52.912925856 +0000 UTC m=+0.544147726 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:57:52 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:57:53 localhost systemd[1]: tmp-crun.DH5JAU.mount: Deactivated successfully. Dec 2 03:57:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:57:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:57:55 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:57:55 localhost recover_tripleo_nova_virtqemud[101048]: 62312 Dec 2 03:57:55 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:57:55 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:57:55 localhost systemd[1]: tmp-crun.F4u1rr.mount: Deactivated successfully. Dec 2 03:57:55 localhost podman[101035]: 2025-12-02 08:57:55.320840741 +0000 UTC m=+0.083815320 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:57:55 localhost podman[101035]: 2025-12-02 08:57:55.357061089 +0000 UTC m=+0.120035718 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, release=1761123044, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:57:55 localhost systemd[1]: tmp-crun.JoyMvF.mount: Deactivated successfully. Dec 2 03:57:55 localhost podman[101035]: unhealthy Dec 2 03:57:55 localhost podman[101034]: 2025-12-02 08:57:55.377828433 +0000 UTC m=+0.141325155 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, 
config_id=tripleo_step4) Dec 2 03:57:55 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:57:55 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:57:55 localhost podman[101034]: 2025-12-02 08:57:55.421985293 +0000 UTC m=+0.185482045 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64) Dec 2 03:57:55 localhost podman[101034]: unhealthy Dec 2 03:57:55 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:57:55 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:58:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:58:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:58:04 localhost systemd[1]: tmp-crun.Yw8Qr1.mount: Deactivated successfully. Dec 2 03:58:04 localhost podman[101137]: 2025-12-02 08:58:04.43934414 +0000 UTC m=+0.077017808 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=1761123044, container_name=collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd) Dec 2 03:58:04 localhost podman[101137]: 2025-12-02 08:58:04.44906326 +0000 UTC m=+0.086736938 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-collectd) Dec 2 03:58:04 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 03:58:04 localhost podman[101138]: 2025-12-02 08:58:04.488047281 +0000 UTC m=+0.122299328 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z) Dec 2 03:58:04 localhost podman[101138]: 2025-12-02 08:58:04.497102783 +0000 UTC m=+0.131354830 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 2 03:58:04 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:58:05 localhost systemd[1]: tmp-crun.w3stW2.mount: Deactivated successfully. Dec 2 03:58:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:58:14 localhost podman[101174]: 2025-12-02 08:58:14.445017366 +0000 UTC m=+0.085773242 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:58:14 localhost podman[101174]: 2025-12-02 08:58:14.658277232 +0000 UTC m=+0.299033188 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=metrics_qdr, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible) Dec 2 03:58:14 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:58:23 localhost podman[101211]: 2025-12-02 08:58:23.464887631 +0000 UTC m=+0.088383702 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi) Dec 2 03:58:23 localhost systemd[1]: tmp-crun.N6KBqY.mount: Deactivated successfully. 
Dec 2 03:58:23 localhost podman[101206]: 2025-12-02 08:58:23.517782354 +0000 UTC m=+0.145033735 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:58:23 localhost podman[101211]: 2025-12-02 08:58:23.522000136 +0000 UTC m=+0.145496217 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:58:23 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:58:23 localhost podman[101206]: 2025-12-02 08:58:23.572150546 +0000 UTC m=+0.199401937 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 2 03:58:23 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:58:23 localhost podman[101205]: 2025-12-02 08:58:23.614733173 +0000 UTC m=+0.245183340 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.12, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=nova_compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:58:23 localhost podman[101204]: 2025-12-02 08:58:23.575898925 +0000 UTC m=+0.208707625 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com) Dec 2 03:58:23 localhost podman[101205]: 2025-12-02 08:58:23.644021616 +0000 UTC m=+0.274471783 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, 
io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 2 03:58:23 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:58:23 localhost podman[101203]: 2025-12-02 08:58:23.724176706 +0000 UTC m=+0.357963603 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:58:23 localhost podman[101203]: 2025-12-02 08:58:23.733967528 +0000 UTC m=+0.367754495 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:58:23 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:58:23 localhost podman[101204]: 2025-12-02 08:58:23.908058308 +0000 UTC m=+0.540867018 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.12, config_id=tripleo_step4, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true) Dec 2 03:58:23 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:58:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:58:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 03:58:26 localhost systemd[1]: tmp-crun.EqB5Rt.mount: Deactivated successfully. 
Dec 2 03:58:26 localhost podman[101325]: 2025-12-02 08:58:26.443074199 +0000 UTC m=+0.084734815 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Dec 2 03:58:26 localhost podman[101325]: 2025-12-02 08:58:26.460358771 +0000 UTC m=+0.102019357 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, 
Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Dec 2 03:58:26 localhost podman[101325]: unhealthy Dec 2 03:58:26 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:58:26 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:58:26 localhost podman[101326]: 2025-12-02 08:58:26.559329684 +0000 UTC m=+0.193789607 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Dec 2 03:58:26 localhost podman[101326]: 2025-12-02 08:58:26.600951466 +0000 UTC m=+0.235411389 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
container_name=ovn_controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 03:58:26 localhost podman[101326]: unhealthy Dec 2 03:58:26 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:58:26 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:58:27 localhost systemd[1]: tmp-crun.jnBu83.mount: Deactivated successfully. Dec 2 03:58:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:58:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:58:35 localhost podman[101365]: 2025-12-02 08:58:35.447583792 +0000 UTC m=+0.088643129 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:58:35 localhost podman[101365]: 2025-12-02 08:58:35.455670879 +0000 UTC m=+0.096730206 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, container_name=collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:58:35 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:58:35 localhost systemd[1]: tmp-crun.hq6IP0.mount: Deactivated successfully. 
Dec 2 03:58:35 localhost podman[101366]: 2025-12-02 08:58:35.55903759 +0000 UTC m=+0.195968306 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-type=git, container_name=iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 03:58:35 localhost podman[101366]: 2025-12-02 08:58:35.568851261 +0000 UTC m=+0.205781947 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, tcib_managed=true, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 03:58:35 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:58:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:58:45 localhost podman[101402]: 2025-12-02 08:58:45.442331906 +0000 UTC m=+0.086499392 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible) Dec 2 03:58:45 localhost podman[101402]: 2025-12-02 08:58:45.675883504 +0000 UTC m=+0.320051020 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:58:45 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:58:54 localhost systemd[1]: tmp-crun.tx66Xy.mount: Deactivated successfully. Dec 2 03:58:54 localhost systemd[1]: tmp-crun.BDXNw7.mount: Deactivated successfully. Dec 2 03:58:54 localhost podman[101433]: 2025-12-02 08:58:54.472909106 +0000 UTC m=+0.099868008 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=) Dec 2 03:58:54 localhost podman[101432]: 2025-12-02 08:58:54.483310754 +0000 UTC m=+0.119846992 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack 
TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 2 03:58:54 localhost podman[101431]: 2025-12-02 08:58:54.441914108 +0000 UTC m=+0.081425256 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1) Dec 2 03:58:54 localhost podman[101441]: 2025-12-02 08:58:54.569754393 +0000 UTC m=+0.195232246 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, 
maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:58:54 localhost podman[101431]: 2025-12-02 08:58:54.576341009 +0000 UTC m=+0.215852197 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, name=rhosp17/openstack-cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:58:54 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:58:54 localhost podman[101445]: 2025-12-02 08:58:54.621198067 +0000 UTC m=+0.240136185 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4) Dec 2 03:58:54 localhost podman[101441]: 2025-12-02 08:58:54.630060654 +0000 UTC m=+0.255538537 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute) Dec 2 03:58:54 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:58:54 localhost podman[101433]: 2025-12-02 08:58:54.645395843 +0000 UTC m=+0.272354725 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:58:54 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:58:54 localhost podman[101445]: 2025-12-02 08:58:54.676850553 +0000 UTC m=+0.295788651 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:58:54 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:58:54 localhost podman[101432]: 2025-12-02 08:58:54.837169145 +0000 UTC m=+0.473705473 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:58:54 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:58:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:58:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:58:57 localhost podman[101549]: 2025-12-02 08:58:57.459405597 +0000 UTC m=+0.092279316 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, container_name=ovn_controller, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:58:57 localhost podman[101549]: 2025-12-02 08:58:57.501048858 +0000 UTC m=+0.133922597 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12) Dec 2 03:58:57 localhost podman[101549]: unhealthy Dec 2 03:58:57 localhost podman[101548]: 2025-12-02 08:58:57.518021081 +0000 UTC m=+0.153897010 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 03:58:57 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:58:57 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 03:58:57 localhost podman[101548]: 2025-12-02 08:58:57.540143062 +0000 UTC m=+0.176019021 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Dec 2 03:58:57 localhost podman[101548]: unhealthy Dec 2 03:58:57 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:58:57 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:59:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:59:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:59:06 localhost podman[101715]: 2025-12-02 08:59:06.434724999 +0000 UTC m=+0.066268740 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Dec 2 03:59:06 localhost podman[101715]: 2025-12-02 08:59:06.44672056 +0000 UTC m=+0.078264271 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 03:59:06 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:59:06 localhost systemd[1]: tmp-crun.0gDZlO.mount: Deactivated successfully. 
Dec 2 03:59:06 localhost podman[101716]: 2025-12-02 08:59:06.500192418 +0000 UTC m=+0.132368306 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack 
Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-iscsid, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3) Dec 2 03:59:06 localhost podman[101716]: 2025-12-02 08:59:06.53809937 +0000 UTC m=+0.170275258 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:59:06 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:59:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 03:59:16 localhost podman[101754]: 2025-12-02 08:59:16.450900135 +0000 UTC m=+0.091599638 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public) Dec 2 03:59:16 localhost podman[101754]: 2025-12-02 08:59:16.678037482 +0000 UTC m=+0.318736955 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 03:59:16 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:59:25 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 03:59:25 localhost recover_tripleo_nova_virtqemud[101815]: 62312 Dec 2 03:59:25 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 03:59:25 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 03:59:25 localhost systemd[1]: tmp-crun.v7amKo.mount: Deactivated successfully. Dec 2 03:59:25 localhost podman[101786]: 2025-12-02 08:59:25.45772013 +0000 UTC m=+0.091616168 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:59:25 localhost podman[101786]: 2025-12-02 08:59:25.482237616 +0000 UTC m=+0.116133654 container exec_died 
1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1) Dec 2 03:59:25 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 03:59:25 localhost podman[101785]: 2025-12-02 08:59:25.532534529 +0000 UTC m=+0.169351905 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 03:59:25 localhost podman[101784]: 2025-12-02 08:59:25.436576455 +0000 UTC m=+0.079293548 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 03:59:25 localhost podman[101787]: 2025-12-02 08:59:25.487782794 +0000 UTC m=+0.121761254 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, 
container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc.) 
Dec 2 03:59:25 localhost podman[101784]: 2025-12-02 08:59:25.571083598 +0000 UTC m=+0.213800731 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4) Dec 2 03:59:25 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 03:59:25 localhost podman[101787]: 2025-12-02 08:59:25.620875728 +0000 UTC m=+0.254854198 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044) Dec 2 03:59:25 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 03:59:25 localhost podman[101793]: 2025-12-02 08:59:25.696737754 +0000 UTC m=+0.330845648 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 03:59:25 localhost podman[101793]: 2025-12-02 08:59:25.742937739 +0000 UTC m=+0.377045623 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, distribution-scope=public, release=1761123044, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 03:59:25 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 03:59:25 localhost podman[101785]: 2025-12-02 08:59:25.871435441 +0000 UTC m=+0.508252837 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12) Dec 2 03:59:25 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:59:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:59:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:59:28 localhost podman[101908]: 2025-12-02 08:59:28.431722007 +0000 UTC m=+0.075891528 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git) Dec 2 03:59:28 localhost podman[101908]: 2025-12-02 08:59:28.445974148 +0000 UTC m=+0.090143619 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://www.redhat.com, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z) Dec 2 03:59:28 localhost podman[101908]: unhealthy Dec 2 03:59:28 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:59:28 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 03:59:28 localhost systemd[1]: tmp-crun.Kcmg3r.mount: Deactivated successfully. Dec 2 03:59:28 localhost podman[101909]: 2025-12-02 08:59:28.555530724 +0000 UTC m=+0.196833678 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, vendor=Red Hat, Inc., container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 03:59:28 localhost podman[101909]: 2025-12-02 08:59:28.575077036 +0000 UTC m=+0.216380010 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true) Dec 2 03:59:28 localhost podman[101909]: unhealthy Dec 2 03:59:28 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:59:28 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 03:59:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 03:59:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 03:59:37 localhost podman[101946]: 2025-12-02 08:59:37.451527678 +0000 UTC m=+0.089876662 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Dec 2 03:59:37 localhost podman[101947]: 2025-12-02 08:59:37.510537074 +0000 UTC m=+0.143417102 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 03:59:37 localhost podman[101946]: 2025-12-02 08:59:37.534215017 +0000 UTC m=+0.172563981 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, vcs-type=git, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible) Dec 2 03:59:37 localhost podman[101947]: 2025-12-02 08:59:37.544135321 +0000 UTC m=+0.177015329 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid) Dec 2 03:59:37 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 03:59:37 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 03:59:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 03:59:47 localhost systemd[1]: tmp-crun.78sWRn.mount: Deactivated successfully. 
Dec 2 03:59:47 localhost podman[101987]: 2025-12-02 08:59:47.438684099 +0000 UTC m=+0.083133071 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Dec 2 03:59:47 localhost podman[101987]: 2025-12-02 08:59:47.652161191 +0000 UTC m=+0.296610203 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 03:59:47 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 03:59:56 localhost systemd[1]: tmp-crun.z2tkb6.mount: Deactivated successfully. Dec 2 03:59:56 localhost podman[102016]: 2025-12-02 08:59:56.466820335 +0000 UTC m=+0.104344478 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, vcs-type=git, name=rhosp17/openstack-cron, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public) Dec 2 03:59:56 localhost podman[102016]: 2025-12-02 08:59:56.47601376 +0000 UTC m=+0.113537923 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 03:59:56 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 03:59:56 localhost podman[102025]: 2025-12-02 08:59:56.524700561 +0000 UTC m=+0.149694870 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Dec 2 03:59:56 localhost podman[102018]: 2025-12-02 08:59:56.572686253 +0000 UTC m=+0.203193379 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 03:59:56 localhost podman[102025]: 2025-12-02 08:59:56.60328145 +0000 UTC m=+0.228275759 
container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 03:59:56 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 03:59:56 localhost podman[102017]: 2025-12-02 08:59:56.619550285 +0000 UTC m=+0.253367579 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 2 03:59:56 localhost podman[102019]: 2025-12-02 08:59:56.681692975 +0000 UTC m=+0.308195023 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute) Dec 2 03:59:56 localhost podman[102018]: 2025-12-02 08:59:56.698598846 +0000 UTC m=+0.329105962 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, architecture=x86_64, tcib_managed=true, distribution-scope=public, config_id=tripleo_step5, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 03:59:56 localhost podman[102019]: 2025-12-02 08:59:56.706933979 +0000 UTC m=+0.333436057 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 2 
03:59:56 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 03:59:56 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 03:59:57 localhost podman[102017]: 2025-12-02 08:59:57.016326582 +0000 UTC m=+0.650143886 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 2 03:59:57 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 03:59:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 03:59:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 03:59:59 localhost podman[102138]: 2025-12-02 08:59:59.425830861 +0000 UTC m=+0.071799979 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible) Dec 2 03:59:59 localhost systemd[1]: tmp-crun.lVQovS.mount: Deactivated successfully. 
Dec 2 03:59:59 localhost podman[102139]: 2025-12-02 08:59:59.488523476 +0000 UTC m=+0.128014971 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc.) Dec 2 03:59:59 localhost podman[102138]: 2025-12-02 08:59:59.513908894 +0000 UTC m=+0.159878072 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 03:59:59 localhost podman[102138]: unhealthy Dec 2 03:59:59 localhost podman[102139]: 2025-12-02 08:59:59.525114193 +0000 UTC m=+0.164605658 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:34:05Z, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64) Dec 2 03:59:59 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:59:59 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 03:59:59 localhost podman[102139]: unhealthy Dec 2 03:59:59 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 03:59:59 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:00:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:00:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:00:08 localhost podman[102260]: 2025-12-02 09:00:08.47057145 +0000 UTC m=+0.106458045 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, name=rhosp17/openstack-collectd, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:00:08 localhost podman[102261]: 2025-12-02 09:00:08.433827018 +0000 UTC m=+0.072425615 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=iscsid) Dec 2 04:00:08 localhost podman[102261]: 2025-12-02 09:00:08.51251209 +0000 UTC m=+0.151110627 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, version=17.1.12, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Dec 2 04:00:08 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:00:08 localhost podman[102260]: 2025-12-02 09:00:08.536097341 +0000 UTC m=+0.171983946 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Dec 2 04:00:08 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:00:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:00:18 localhost podman[102300]: 2025-12-02 09:00:18.440384148 +0000 UTC m=+0.084645962 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 
17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:00:18 localhost podman[102300]: 2025-12-02 09:00:18.641372617 +0000 UTC m=+0.285634451 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:00:18 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:00:27 localhost systemd[1]: tmp-crun.6V3C6q.mount: Deactivated successfully. Dec 2 04:00:27 localhost podman[102333]: 2025-12-02 09:00:27.51558692 +0000 UTC m=+0.150012548 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 04:00:27 localhost podman[102332]: 2025-12-02 09:00:27.51635988 +0000 UTC m=+0.153101900 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, release=1761123044, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:00:27 localhost podman[102337]: 2025-12-02 09:00:27.470464734 +0000 UTC m=+0.094440393 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4) Dec 2 04:00:27 localhost podman[102337]: 2025-12-02 09:00:27.555840455 +0000 UTC m=+0.179816084 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 04:00:27 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 04:00:27 localhost podman[102332]: 2025-12-02 09:00:27.601087443 +0000 UTC m=+0.237829383 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:00:27 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 04:00:27 localhost podman[102334]: 2025-12-02 09:00:27.60582241 +0000 UTC m=+0.241241564 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64) Dec 2 04:00:27 localhost podman[102334]: 2025-12-02 09:00:27.689362262 +0000 UTC m=+0.324781426 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:00:27 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 04:00:27 localhost podman[102351]: 2025-12-02 09:00:27.557868699 +0000 UTC m=+0.181527040 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12) Dec 2 04:00:27 localhost podman[102351]: 2025-12-02 09:00:27.743061656 +0000 UTC m=+0.366720037 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:00:27 localhost systemd[1]: 
7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 04:00:27 localhost podman[102333]: 2025-12-02 09:00:27.915980124 +0000 UTC m=+0.550405722 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4) Dec 2 04:00:27 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:00:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:00:30 localhost podman[102452]: 2025-12-02 09:00:30.44854432 +0000 UTC m=+0.087692614 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 04:00:30 localhost systemd[1]: tmp-crun.hVeYl5.mount: Deactivated successfully. 
Dec 2 04:00:30 localhost podman[102453]: 2025-12-02 09:00:30.496704917 +0000 UTC m=+0.133903998 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:00:30 localhost podman[102452]: 2025-12-02 09:00:30.517239295 +0000 UTC m=+0.156387559 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 04:00:30 localhost podman[102452]: unhealthy Dec 2 04:00:30 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:00:30 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 04:00:30 localhost podman[102453]: 2025-12-02 09:00:30.535864283 +0000 UTC m=+0.173063334 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack 
TripleO Team, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:00:30 localhost podman[102453]: unhealthy Dec 2 04:00:30 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:00:30 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:00:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:00:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:00:39 localhost systemd[1]: tmp-crun.okNTqY.mount: Deactivated successfully. Dec 2 04:00:39 localhost podman[102494]: 2025-12-02 09:00:39.437995159 +0000 UTC m=+0.083856760 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 2 04:00:39 localhost podman[102495]: 2025-12-02 09:00:39.446426184 +0000 UTC m=+0.087335003 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, 
health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, config_id=tripleo_step3, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:00:39 localhost podman[102495]: 2025-12-02 09:00:39.459063732 +0000 UTC m=+0.099972551 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 04:00:39 localhost podman[102494]: 2025-12-02 09:00:39.471601377 +0000 UTC m=+0.117462978 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Dec 2 04:00:39 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:00:39 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 04:00:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 04:00:49 localhost podman[102532]: 2025-12-02 09:00:49.441560198 +0000 UTC m=+0.085665289 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:00:49 localhost podman[102532]: 2025-12-02 09:00:49.658988946 +0000 UTC m=+0.303094057 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044) Dec 2 04:00:49 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:00:58 localhost podman[102569]: 2025-12-02 09:00:58.46665954 +0000 UTC m=+0.094415443 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:00:58 localhost podman[102563]: 2025-12-02 09:00:58.514774205 +0000 UTC m=+0.153415068 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond) Dec 2 04:00:58 localhost podman[102569]: 2025-12-02 09:00:58.525154853 +0000 UTC m=+0.152910766 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, release=1761123044, version=17.1.12, tcib_managed=true, 
io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:00:58 localhost podman[102564]: 2025-12-02 09:00:58.557860437 +0000 UTC m=+0.192877473 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4) Dec 2 04:00:58 localhost podman[102574]: 2025-12-02 09:00:58.588501585 +0000 UTC m=+0.213962456 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 04:00:58 localhost podman[102563]: 2025-12-02 09:00:58.597094865 +0000 UTC m=+0.235735688 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.buildah.version=1.41.4, 
summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container) Dec 2 04:00:58 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 04:00:58 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 04:00:58 localhost podman[102565]: 2025-12-02 09:00:58.657011145 +0000 UTC m=+0.288492537 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step5, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4) Dec 2 04:00:58 localhost podman[102565]: 2025-12-02 09:00:58.683086081 +0000 UTC m=+0.314567493 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:00:58 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
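The records above show the normal healthcheck cycle on this host: systemd starts a transient `/usr/bin/podman healthcheck run <id>` unit, podman emits a `container health_status` event (here `health_status=healthy` for `nova_compute`) followed by `exec_died`, and systemd then logs the unit as "Deactivated successfully." A minimal sketch of pulling the container name and health verdict out of such event lines — `parse_health_event` and the regex are hypothetical helpers written for this log format, not part of podman or TripleO tooling:

```python
import re

# Hypothetical helper: extract (name, health_status) from a podman
# "container health_status" event line as it appears in this journal.
EVENT_RE = re.compile(
    r"container health_status \S+ \(image=(?P<image>[^,]+), "
    r"name=(?P<name>[^,]+), health_status=(?P<status>\w+)"
)

def parse_health_event(line: str):
    """Return (container name, status) for a health_status line, else None."""
    m = EVENT_RE.search(line)
    return (m.group("name"), m.group("status")) if m else None

# Abbreviated sample line in the same shape as the log above.
line = ("Dec 2 04:00:58 localhost podman[102565]: ... container health_status "
        "1a8728c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, "
        "name=nova_compute, health_status=healthy, description=...)")
print(parse_health_event(line))  # ('nova_compute', 'healthy')
```

The `exec_died` event that follows each check is expected: it marks the healthcheck process exiting, not the container itself dying.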
Dec 2 04:00:58 localhost podman[102574]: 2025-12-02 09:00:58.735168753 +0000 UTC m=+0.360629584 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 2 04:00:58 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 04:00:58 localhost podman[102564]: 2025-12-02 09:00:58.920059621 +0000 UTC m=+0.555076667 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:00:58 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:00:59 localhost systemd[1]: tmp-crun.zV8sQW.mount: Deactivated successfully. Dec 2 04:01:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:01:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:01:01 localhost systemd[1]: tmp-crun.U9W07d.mount: Deactivated successfully. Dec 2 04:01:01 localhost podman[102707]: 2025-12-02 09:01:01.440336799 +0000 UTC m=+0.071166073 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git) Dec 2 04:01:01 localhost podman[102706]: 2025-12-02 09:01:01.459976733 +0000 UTC m=+0.089987225 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:01:01 localhost podman[102707]: 2025-12-02 09:01:01.488244398 +0000 UTC m=+0.119073702 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, managed_by=tripleo_ansible, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 04:01:01 localhost podman[102707]: unhealthy Dec 2 04:01:01 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:01:01 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 04:01:01 localhost podman[102706]: 2025-12-02 09:01:01.502789287 +0000 UTC m=+0.132799769 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:01:01 localhost podman[102706]: unhealthy Dec 2 04:01:01 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:01:01 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:01:02 localhost systemd[1]: tmp-crun.49FuzX.mount: Deactivated successfully. 
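This stretch shows the failure path: `ovn_controller` and `ovn_metadata_agent` both report `health_status=unhealthy`, podman prints `unhealthy` and exits non-zero, and the transient `<container-id>.service` unit fails with `status=1/FAILURE` and result `'exit-code'`. A sketch of tallying those failed units from journal lines — `failed_units` and its regex are hypothetical helpers for this log format, not existing systemd or podman tooling:

```python
import re

# Hypothetical helper: list transient units systemd marked as failed
# with result 'exit-code', as in the healthcheck failures above.
FAIL_RE = re.compile(
    r"systemd\[1\]: (?P<unit>\S+\.service): Failed with result 'exit-code'"
)

def failed_units(lines):
    """Return the unit names of failed services, in log order."""
    return [m.group("unit") for line in lines if (m := FAIL_RE.search(line))]

log = [
    "Dec 2 04:01:01 localhost systemd[1]: e7ee.service: Main process exited, code=exited, status=1/FAILURE",
    "Dec 2 04:01:01 localhost systemd[1]: e7ee.service: Failed with result 'exit-code'.",
    "Dec 2 04:01:01 localhost systemd[1]: 1843.service: Failed with result 'exit-code'.",
]
print(failed_units(log))  # ['e7ee.service', '1843.service']
```

Note the "Main process exited" line alone is not counted; only the definitive "Failed with result 'exit-code'" record identifies the unit as failed.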
Dec 2 04:01:04 localhost podman[102848]: 2025-12-02 09:01:04.103219275 +0000 UTC m=+0.078239901 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, version=7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-type=git, release=1763362218, vendor=Red Hat, Inc., GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 2 04:01:04 localhost podman[102848]: 2025-12-02 09:01:04.20190213 +0000 UTC m=+0.176922746 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, name=rhceph, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True) Dec 2 04:01:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:01:06 localhost recover_tripleo_nova_virtqemud[102994]: 62312 Dec 2 04:01:06 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:01:06 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:01:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. 
Dec 2 04:01:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:01:10 localhost podman[102995]: 2025-12-02 09:01:10.503803737 +0000 UTC m=+0.136338983 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-collectd) Dec 2 04:01:10 localhost podman[102995]: 2025-12-02 09:01:10.514111692 +0000 UTC m=+0.146646948 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=collectd, config_id=tripleo_step3, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Dec 2 04:01:10 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 04:01:10 localhost podman[102996]: 2025-12-02 09:01:10.467908968 +0000 UTC m=+0.099074917 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1) Dec 2 04:01:10 localhost podman[102996]: 2025-12-02 09:01:10.60203172 +0000 UTC m=+0.233197719 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12) Dec 2 04:01:10 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:01:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:01:20 localhost podman[103035]: 2025-12-02 09:01:20.442253736 +0000 UTC m=+0.087016925 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Dec 2 04:01:20 localhost podman[103035]: 2025-12-02 09:01:20.660974518 +0000 UTC m=+0.305737717 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:01:20 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:01:29 localhost systemd[1]: tmp-crun.TJK4PH.mount: Deactivated successfully. Dec 2 04:01:29 localhost podman[103065]: 2025-12-02 09:01:29.482991967 +0000 UTC m=+0.122287097 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, com.redhat.component=openstack-cron-container, container_name=logrotate_crond) Dec 2 04:01:29 localhost podman[103065]: 2025-12-02 09:01:29.489984153 +0000 UTC m=+0.129279323 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:01:29 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 04:01:29 localhost podman[103066]: 2025-12-02 09:01:29.490338283 +0000 UTC m=+0.125394710 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, 
version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:01:29 localhost podman[103073]: 2025-12-02 09:01:29.544727176 +0000 UTC m=+0.175535830 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=) Dec 2 04:01:29 localhost podman[103073]: 2025-12-02 09:01:29.561057782 +0000 UTC m=+0.191866456 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, 
container_name=ceilometer_agent_compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, 
com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:01:29 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 04:01:29 localhost podman[103079]: 2025-12-02 09:01:29.650251475 +0000 UTC m=+0.277276217 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 2 04:01:29 localhost podman[103067]: 2025-12-02 09:01:29.696763517 +0000 UTC m=+0.327298064 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:01:29 localhost podman[103079]: 2025-12-02 09:01:29.710238847 +0000 UTC m=+0.337263519 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 2 04:01:29 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 04:01:29 localhost podman[103067]: 2025-12-02 09:01:29.745945831 +0000 UTC m=+0.376480358 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 
17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:01:29 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 04:01:29 localhost podman[103066]: 2025-12-02 09:01:29.863663295 +0000 UTC m=+0.498719752 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=nova_migration_target) Dec 2 04:01:29 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:01:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:01:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:01:32 localhost systemd[1]: tmp-crun.7hPt0m.mount: Deactivated successfully. 
Dec 2 04:01:32 localhost podman[103183]: 2025-12-02 09:01:32.432692985 +0000 UTC m=+0.076493695 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, 
batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 2 04:01:32 localhost podman[103184]: 2025-12-02 09:01:32.453690905 +0000 UTC m=+0.089300796 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 2 04:01:32 localhost podman[103184]: 2025-12-02 09:01:32.46396093 +0000 UTC m=+0.099570801 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller) Dec 2 04:01:32 localhost podman[103184]: unhealthy Dec 2 04:01:32 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:01:32 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 04:01:32 localhost podman[103183]: 2025-12-02 09:01:32.478446206 +0000 UTC m=+0.122246896 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 04:01:32 localhost podman[103183]: unhealthy Dec 2 04:01:32 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:01:32 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:01:33 localhost systemd[1]: tmp-crun.dbgBrQ.mount: Deactivated successfully. Dec 2 04:01:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:01:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 04:01:41 localhost podman[103223]: 2025-12-02 09:01:41.43643378 +0000 UTC m=+0.066876467 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, tcib_managed=true, version=17.1.12, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc.) Dec 2 04:01:41 localhost podman[103223]: 2025-12-02 09:01:41.47088768 +0000 UTC m=+0.101330337 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 04:01:41 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 04:01:41 localhost podman[103222]: 2025-12-02 09:01:41.490114914 +0000 UTC m=+0.121896468 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:01:41 localhost podman[103222]: 2025-12-02 09:01:41.52816487 +0000 UTC m=+0.159946414 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, tcib_managed=true, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 
collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team) Dec 2 04:01:41 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:01:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:01:51 localhost podman[103261]: 2025-12-02 09:01:51.421834913 +0000 UTC m=+0.062516870 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, distribution-scope=public, vcs-type=git, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, tcib_managed=true) Dec 2 04:01:51 localhost podman[103261]: 2025-12-02 09:01:51.638155041 +0000 UTC m=+0.278837028 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, version=17.1.12, config_id=tripleo_step1) Dec 2 04:01:51 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:02:00 localhost podman[103300]: 2025-12-02 09:02:00.475344516 +0000 UTC m=+0.097063394 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, managed_by=tripleo_ansible) Dec 2 04:02:00 localhost podman[103294]: 2025-12-02 09:02:00.452299541 +0000 UTC m=+0.081383735 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, release=1761123044, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:02:00 localhost podman[103300]: 2025-12-02 09:02:00.529569764 +0000 UTC m=+0.151288632 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, 
vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 04:02:00 localhost podman[103294]: 2025-12-02 09:02:00.53765063 +0000 UTC m=+0.166734914 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:02:00 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 04:02:00 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 04:02:00 localhost podman[103293]: 2025-12-02 09:02:00.510472395 +0000 UTC m=+0.139808246 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Dec 2 04:02:00 localhost podman[103291]: 2025-12-02 09:02:00.608784551 +0000 UTC m=+0.242387006 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, container_name=logrotate_crond, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:02:00 localhost podman[103291]: 2025-12-02 09:02:00.61399825 +0000 UTC m=+0.247600705 container exec_died 
0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, 
Inc., io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Dec 2 04:02:00 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 04:02:00 localhost podman[103292]: 2025-12-02 09:02:00.65819642 +0000 UTC m=+0.288360383 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc.) 
Dec 2 04:02:00 localhost podman[103293]: 2025-12-02 09:02:00.691328865 +0000 UTC m=+0.320664776 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 04:02:00 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 04:02:01 localhost podman[103292]: 2025-12-02 09:02:01.044134589 +0000 UTC m=+0.674298552 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, 
url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_migration_target, vendor=Red Hat, Inc.) Dec 2 04:02:01 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:02:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:02:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:02:03 localhost systemd[1]: tmp-crun.KArVo7.mount: Deactivated successfully. 
Dec 2 04:02:03 localhost podman[103413]: 2025-12-02 09:02:03.441926735 +0000 UTC m=+0.085872345 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 04:02:03 localhost podman[103413]: 2025-12-02 09:02:03.481924603 +0000 UTC m=+0.125870143 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:02:03 localhost podman[103413]: unhealthy Dec 2 04:02:03 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:02:03 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:02:03 localhost podman[103414]: 2025-12-02 09:02:03.483319821 +0000 UTC m=+0.124537998 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 04:02:03 localhost podman[103414]: 2025-12-02 09:02:03.563317627 +0000 UTC m=+0.204535784 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, container_name=ovn_controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true) Dec 2 04:02:03 localhost podman[103414]: unhealthy Dec 2 04:02:03 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:02:03 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:02:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:02:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 04:02:12 localhost podman[103530]: 2025-12-02 09:02:12.45437167 +0000 UTC m=+0.089202355 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git) Dec 2 04:02:12 localhost systemd[1]: tmp-crun.cX3pwF.mount: Deactivated successfully. Dec 2 04:02:12 localhost podman[103531]: 2025-12-02 09:02:12.501080138 +0000 UTC m=+0.134526505 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 2 04:02:12 localhost podman[103530]: 2025-12-02 09:02:12.518333839 +0000 UTC m=+0.153164514 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat 
OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044) Dec 2 04:02:12 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:02:12 localhost podman[103531]: 2025-12-02 09:02:12.533748391 +0000 UTC m=+0.167194798 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z) Dec 2 04:02:12 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:02:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:02:22 localhost podman[103569]: 2025-12-02 09:02:22.441909173 +0000 UTC m=+0.084748545 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64) Dec 2 04:02:22 localhost podman[103569]: 2025-12-02 09:02:22.614473053 +0000 UTC m=+0.257312455 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=metrics_qdr, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 2 04:02:22 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 04:02:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:02:31 localhost podman[103599]: 2025-12-02 09:02:31.458140899 +0000 UTC m=+0.089331848 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12) Dec 2 04:02:31 localhost systemd[1]: tmp-crun.jGRnWN.mount: Deactivated successfully. 
Dec 2 04:02:31 localhost podman[103600]: 2025-12-02 09:02:31.508468673 +0000 UTC m=+0.137247597 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:02:31 localhost podman[103597]: 2025-12-02 09:02:31.548860152 +0000 UTC m=+0.185011134 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=) Dec 2 04:02:31 localhost podman[103601]: 2025-12-02 09:02:31.555298734 +0000 UTC m=+0.178649433 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 
04:02:31 localhost podman[103597]: 2025-12-02 09:02:31.579909821 +0000 UTC m=+0.216060813 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, 
tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 2 04:02:31 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 04:02:31 localhost podman[103600]: 2025-12-02 09:02:31.58996294 +0000 UTC m=+0.218741834 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 04:02:31 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 04:02:31 localhost podman[103601]: 2025-12-02 09:02:31.608753273 +0000 UTC m=+0.232104012 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 
17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 04:02:31 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. Dec 2 04:02:31 localhost podman[103599]: 2025-12-02 09:02:31.64050444 +0000 UTC m=+0.271695469 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 2 04:02:31 localhost systemd[1]: 
1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 04:02:31 localhost podman[103598]: 2025-12-02 09:02:31.710980343 +0000 UTC m=+0.342823709 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:02:32 localhost podman[103598]: 2025-12-02 09:02:32.082104538 +0000 UTC m=+0.713947984 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 04:02:32 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:02:32 localhost systemd[1]: tmp-crun.rzLv7A.mount: Deactivated successfully. Dec 2 04:02:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:02:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:02:34 localhost podman[103713]: 2025-12-02 09:02:34.435513451 +0000 UTC m=+0.078319443 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, version=17.1.12) Dec 2 04:02:34 localhost systemd[1]: tmp-crun.ikL5dH.mount: Deactivated successfully. 
Dec 2 04:02:34 localhost podman[103714]: 2025-12-02 09:02:34.458241738 +0000 UTC m=+0.096760726 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, distribution-scope=public, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 04:02:34 localhost podman[103714]: 2025-12-02 09:02:34.475005626 +0000 UTC m=+0.113524614 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 2 04:02:34 localhost podman[103714]: unhealthy Dec 2 04:02:34 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:02:34 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:02:34 localhost podman[103713]: 2025-12-02 09:02:34.521037286 +0000 UTC m=+0.163843308 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, release=1761123044, tcib_managed=true) Dec 2 04:02:34 localhost podman[103713]: unhealthy Dec 2 04:02:34 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:02:34 localhost systemd[1]: 
1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:02:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:02:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:02:43 localhost podman[103754]: 2025-12-02 09:02:43.443588747 +0000 UTC m=+0.081734724 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Dec 2 04:02:43 localhost podman[103754]: 2025-12-02 09:02:43.48300057 +0000 UTC m=+0.121146577 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, architecture=x86_64, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid) Dec 2 04:02:43 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 04:02:43 localhost podman[103753]: 2025-12-02 09:02:43.503376065 +0000 UTC m=+0.143078144 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd) Dec 2 04:02:43 localhost podman[103753]: 2025-12-02 09:02:43.536023816 +0000 UTC m=+0.175725915 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, container_name=collectd, vcs-type=git, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:02:43 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:02:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 04:02:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 2 04:02:53 localhost recover_tripleo_nova_virtqemud[103796]: 62312 Dec 2 04:02:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:02:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:02:53 localhost podman[103792]: 2025-12-02 09:02:53.450117237 +0000 UTC m=+0.093549899 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 2 04:02:53 localhost podman[103792]: 2025-12-02 09:02:53.676081955 +0000 UTC m=+0.319514597 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:02:53 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 04:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 04:03:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:03:02 localhost podman[103823]: 2025-12-02 09:03:02.453319883 +0000 UTC m=+0.093501579 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, container_name=logrotate_crond) Dec 2 04:03:02 localhost podman[103823]: 2025-12-02 09:03:02.461852841 +0000 UTC m=+0.102034597 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.openshift.expose-services=, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:03:02 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 04:03:02 localhost podman[103825]: 2025-12-02 09:03:02.502263981 +0000 UTC m=+0.132251385 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 2 04:03:02 localhost systemd[1]: tmp-crun.0F4AYP.mount: Deactivated successfully. 
Dec 2 04:03:02 localhost podman[103824]: 2025-12-02 09:03:02.56328552 +0000 UTC m=+0.198886714 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Dec 2 04:03:02 localhost podman[103832]: 2025-12-02 09:03:02.579722389 +0000 UTC m=+0.205066659 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044) Dec 2 04:03:02 localhost podman[103825]: 2025-12-02 09:03:02.591336299 +0000 UTC m=+0.221323763 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, 
name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z) Dec 2 04:03:02 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 04:03:02 localhost podman[103831]: 2025-12-02 09:03:02.644967283 +0000 UTC m=+0.271963057 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 04:03:02 localhost podman[103831]: 2025-12-02 09:03:02.658870275 +0000 UTC m=+0.285866049 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible) Dec 2 04:03:02 localhost podman[103832]: 2025-12-02 09:03:02.664902626 +0000 UTC m=+0.290246856 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z) Dec 2 04:03:02 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 04:03:02 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 04:03:02 localhost podman[103824]: 2025-12-02 09:03:02.93904887 +0000 UTC m=+0.574650124 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 04:03:02 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:03:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:03:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:03:05 localhost podman[103945]: 2025-12-02 09:03:05.448069759 +0000 UTC m=+0.082233047 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, release=1761123044, version=17.1.12, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:03:05 localhost podman[103945]: 2025-12-02 09:03:05.464000635 +0000 UTC m=+0.098163913 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:03:05 localhost podman[103945]: unhealthy Dec 2 04:03:05 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:03:05 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:03:05 localhost systemd[1]: tmp-crun.XqWs76.mount: Deactivated successfully. Dec 2 04:03:05 localhost podman[103944]: 2025-12-02 09:03:05.553874436 +0000 UTC m=+0.190800078 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:03:05 localhost 
podman[103944]: 2025-12-02 09:03:05.573905531 +0000 UTC m=+0.210831173 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 04:03:05 localhost podman[103944]: unhealthy Dec 2 04:03:05 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:03:05 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:03:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:03:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:03:14 localhost systemd[1]: tmp-crun.O86cdt.mount: Deactivated successfully. 
Dec 2 04:03:14 localhost podman[104060]: 2025-12-02 09:03:14.444733752 +0000 UTC m=+0.078385955 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3) Dec 2 04:03:14 localhost podman[104060]: 2025-12-02 09:03:14.45666778 +0000 UTC m=+0.090319973 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-iscsid-container, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 2 04:03:14 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 04:03:14 localhost podman[104059]: 2025-12-02 09:03:14.513998941 +0000 UTC m=+0.148143777 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, version=17.1.12, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:03:14 localhost podman[104059]: 2025-12-02 09:03:14.548511744 +0000 UTC m=+0.182656570 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, batch=17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:03:14 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:03:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 04:03:24 localhost systemd[1]: tmp-crun.rMn5dD.mount: Deactivated successfully. 
Dec 2 04:03:24 localhost podman[104099]: 2025-12-02 09:03:24.433128047 +0000 UTC m=+0.076335850 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:03:24 localhost podman[104099]: 2025-12-02 09:03:24.607226048 +0000 UTC m=+0.250433921 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4) Dec 2 04:03:24 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 04:03:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:03:33 localhost podman[104131]: 2025-12-02 09:03:33.432057378 +0000 UTC m=+0.067013360 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc.) Dec 2 04:03:33 localhost systemd[1]: tmp-crun.3NAghI.mount: Deactivated successfully. Dec 2 04:03:33 localhost podman[104142]: 2025-12-02 09:03:33.473975268 +0000 UTC m=+0.101346698 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044) Dec 2 04:03:33 localhost podman[104131]: 2025-12-02 09:03:33.519024682 +0000 UTC m=+0.153980654 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:03:33 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 04:03:33 localhost podman[104142]: 2025-12-02 09:03:33.570143308 +0000 UTC m=+0.197514728 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git) Dec 2 04:03:33 localhost podman[104128]: 2025-12-02 09:03:33.47667339 +0000 UTC m=+0.117080428 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, container_name=nova_migration_target, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:03:33 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Deactivated successfully. 
Dec 2 04:03:33 localhost podman[104129]: 2025-12-02 09:03:33.610501776 +0000 UTC m=+0.245512370 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, release=1761123044, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:03:33 localhost podman[104129]: 2025-12-02 09:03:33.635892714 +0000 UTC m=+0.270903338 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:03:33 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. Dec 2 04:03:33 localhost podman[104127]: 2025-12-02 09:03:33.647530605 +0000 UTC m=+0.287650785 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, distribution-scope=public) Dec 2 04:03:33 localhost podman[104127]: 2025-12-02 09:03:33.682014037 +0000 UTC m=+0.322134247 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Dec 2 04:03:33 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 04:03:33 localhost podman[104128]: 2025-12-02 09:03:33.796854655 +0000 UTC m=+0.437261703 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com) Dec 2 04:03:33 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:03:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:03:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:03:36 localhost podman[104244]: 2025-12-02 09:03:36.456787487 +0000 UTC m=+0.095235405 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, tcib_managed=true, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=) Dec 2 04:03:36 localhost podman[104244]: 2025-12-02 09:03:36.500343391 +0000 UTC m=+0.138791339 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64) Dec 2 04:03:36 localhost podman[104244]: unhealthy Dec 2 04:03:36 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:03:36 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:03:36 localhost systemd[1]: tmp-crun.OajBs7.mount: Deactivated successfully. Dec 2 04:03:36 localhost podman[104243]: 2025-12-02 09:03:36.53323476 +0000 UTC m=+0.172100389 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Dec 2 04:03:36 localhost podman[104243]: 2025-12-02 09:03:36.580004649 +0000 UTC m=+0.218870308 container exec_died 
1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4) Dec 2 04:03:36 localhost podman[104243]: unhealthy Dec 2 04:03:36 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:03:36 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:03:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:03:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:03:45 localhost systemd[1]: tmp-crun.2vPmdy.mount: Deactivated successfully. 
Dec 2 04:03:45 localhost podman[104284]: 2025-12-02 09:03:45.461837663 +0000 UTC m=+0.099491850 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 04:03:45 localhost podman[104284]: 2025-12-02 09:03:45.474188592 +0000 UTC m=+0.111842789 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, 
maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:03:45 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 04:03:45 localhost podman[104285]: 2025-12-02 09:03:45.561809423 +0000 UTC m=+0.196215553 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64) Dec 2 04:03:45 localhost podman[104285]: 2025-12-02 09:03:45.571922404 +0000 UTC m=+0.206328534 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, config_id=tripleo_step3, vcs-type=git, version=17.1.12, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 04:03:45 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:03:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:03:55 localhost podman[104323]: 2025-12-02 09:03:55.426703092 +0000 UTC m=+0.071738448 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, 
distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.) Dec 2 04:03:55 localhost podman[104323]: 2025-12-02 09:03:55.618226258 +0000 UTC m=+0.263261594 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, tcib_managed=true, release=1761123044, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 04:03:55 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 04:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:04:04 localhost podman[104353]: 2025-12-02 09:04:04.459509599 +0000 UTC m=+0.092607995 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Dec 2 04:04:04 localhost systemd[1]: tmp-crun.FUFD2O.mount: Deactivated successfully. 
Dec 2 04:04:04 localhost podman[104352]: 2025-12-02 09:04:04.511664253 +0000 UTC m=+0.146304160 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 
nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Dec 2 04:04:04 localhost systemd[1]: tmp-crun.HO532D.mount: Deactivated successfully. Dec 2 04:04:04 localhost podman[104354]: 2025-12-02 09:04:04.566995221 +0000 UTC m=+0.197565619 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true) Dec 2 04:04:04 localhost podman[104354]: 2025-12-02 09:04:04.598077961 +0000 UTC m=+0.228648369 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 2 04:04:04 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 04:04:04 localhost podman[104355]: 2025-12-02 09:04:04.616244006 +0000 UTC m=+0.243030214 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 2 04:04:04 localhost podman[104351]: 2025-12-02 09:04:04.663798887 +0000 UTC m=+0.300155600 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, 
container_name=logrotate_crond, architecture=x86_64, name=rhosp17/openstack-cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:04:04 localhost podman[104351]: 2025-12-02 09:04:04.674654457 +0000 UTC m=+0.311011210 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Dec 2 04:04:04 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 04:04:04 localhost podman[104353]: 2025-12-02 09:04:04.691154178 +0000 UTC m=+0.324252554 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Dec 2 04:04:04 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Deactivated successfully. 
Dec 2 04:04:04 localhost podman[104355]: 2025-12-02 09:04:04.725150206 +0000 UTC m=+0.351936404 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Dec 2 04:04:04 localhost podman[104355]: unhealthy Dec 2 04:04:04 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:04 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed with result 'exit-code'. Dec 2 04:04:04 localhost podman[104352]: 2025-12-02 09:04:04.863193003 +0000 UTC m=+0.497832910 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4) Dec 2 04:04:04 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:04:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 04:04:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:04:07 localhost podman[104472]: 2025-12-02 09:04:07.440508549 +0000 UTC m=+0.078982571 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1) Dec 2 04:04:07 localhost podman[104472]: 2025-12-02 09:04:07.453938188 +0000 UTC m=+0.092412220 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044) Dec 2 04:04:07 localhost podman[104473]: 2025-12-02 09:04:07.487964426 +0000 UTC m=+0.123077209 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 2 04:04:07 localhost podman[104472]: unhealthy Dec 2 04:04:07 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:07 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:04:07 localhost podman[104473]: 2025-12-02 09:04:07.528039278 +0000 UTC m=+0.163152081 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, 
distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 04:04:07 localhost podman[104473]: unhealthy Dec 2 04:04:07 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:07 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:04:10 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:04:10 localhost recover_tripleo_nova_virtqemud[104593]: 62312 Dec 2 04:04:10 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:04:10 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:04:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:04:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 04:04:16 localhost systemd[1]: tmp-crun.NrzbcW.mount: Deactivated successfully. Dec 2 04:04:16 localhost podman[104594]: 2025-12-02 09:04:16.450827226 +0000 UTC m=+0.090141049 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, release=1761123044, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 
collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 04:04:16 localhost podman[104594]: 2025-12-02 09:04:16.493263579 +0000 UTC m=+0.132577402 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3) Dec 2 04:04:16 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 04:04:16 localhost podman[104595]: 2025-12-02 09:04:16.51651828 +0000 UTC m=+0.153460151 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 04:04:16 localhost podman[104595]: 2025-12-02 09:04:16.52882176 +0000 UTC m=+0.165763651 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64) Dec 2 04:04:16 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:04:17 localhost systemd[1]: tmp-crun.FPVEki.mount: Deactivated successfully. Dec 2 04:04:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:04:26 localhost podman[104634]: 2025-12-02 09:04:26.438333689 +0000 UTC m=+0.075315134 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044) Dec 2 04:04:26 localhost podman[104634]: 2025-12-02 09:04:26.623345811 +0000 UTC m=+0.260326966 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1) Dec 2 04:04:26 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 04:04:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:04:35 localhost podman[104664]: 2025-12-02 09:04:35.470992661 +0000 UTC m=+0.106491005 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
release=1761123044, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, container_name=logrotate_crond, managed_by=tripleo_ansible) Dec 2 04:04:35 localhost podman[104665]: 2025-12-02 09:04:35.446716254 +0000 UTC m=+0.080414090 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, tcib_managed=true) Dec 2 04:04:35 localhost podman[104664]: 2025-12-02 09:04:35.509997094 +0000 UTC m=+0.145495438 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-cron-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) 
Dec 2 04:04:35 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 04:04:35 localhost podman[104673]: 2025-12-02 09:04:35.514632478 +0000 UTC m=+0.139302263 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:04:35 localhost podman[104666]: 2025-12-02 09:04:35.569579626 +0000 UTC m=+0.199318377 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step5, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 04:04:35 localhost podman[104666]: 2025-12-02 09:04:35.590873044 +0000 UTC m=+0.220611805 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 04:04:35 localhost podman[104666]: unhealthy Dec 2 04:04:35 localhost podman[104673]: 2025-12-02 09:04:35.598234261 +0000 UTC m=+0.222904046 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Dec 2 04:04:35 
localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:35 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 04:04:35 localhost podman[104673]: unhealthy Dec 2 04:04:35 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:35 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed with result 'exit-code'. Dec 2 04:04:35 localhost podman[104667]: 2025-12-02 09:04:35.684511527 +0000 UTC m=+0.312631664 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:04:35 localhost podman[104667]: 2025-12-02 09:04:35.713117381 +0000 UTC m=+0.341237518 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public) Dec 2 04:04:35 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. Dec 2 04:04:35 localhost podman[104665]: 2025-12-02 09:04:35.832817028 +0000 UTC m=+0.466514854 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:04:35 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:04:36 localhost systemd[1]: tmp-crun.YHgNEr.mount: Deactivated successfully. Dec 2 04:04:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:04:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:04:38 localhost podman[104781]: 2025-12-02 09:04:38.459863071 +0000 UTC m=+0.092320718 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64) Dec 2 04:04:38 localhost systemd[1]: tmp-crun.Xh0Kdt.mount: Deactivated successfully. Dec 2 04:04:38 localhost podman[104780]: 2025-12-02 09:04:38.513855004 +0000 UTC m=+0.147560504 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, release=1761123044, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 2 04:04:38 localhost podman[104780]: 2025-12-02 09:04:38.525340821 +0000 UTC m=+0.159046301 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:14:25Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, url=https://www.redhat.com, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public) Dec 2 04:04:38 localhost podman[104780]: unhealthy Dec 2 04:04:38 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:38 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:04:38 localhost podman[104781]: 2025-12-02 09:04:38.580831622 +0000 UTC m=+0.213289259 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:04:38 localhost podman[104781]: unhealthy Dec 2 04:04:38 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:04:38 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:04:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:04:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:04:47 localhost systemd[1]: tmp-crun.r1i7ZS.mount: Deactivated successfully. 
Dec 2 04:04:47 localhost podman[104821]: 2025-12-02 09:04:47.443652658 +0000 UTC m=+0.086677307 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.buildah.version=1.41.4, 
description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Dec 2 04:04:47 localhost podman[104820]: 2025-12-02 09:04:47.485520456 +0000 UTC m=+0.128604026 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:04:47 localhost podman[104820]: 2025-12-02 09:04:47.500958369 +0000 UTC m=+0.144041959 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, build-date=2025-11-18T22:51:28Z, 
release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, architecture=x86_64, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 2 04:04:47 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:04:47 localhost podman[104821]: 2025-12-02 09:04:47.556811832 +0000 UTC m=+0.199836461 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 04:04:47 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:04:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:04:57 localhost podman[104858]: 2025-12-02 09:04:57.441496125 +0000 UTC m=+0.084908409 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Dec 2 04:04:57 localhost podman[104858]: 2025-12-02 09:04:57.668283824 +0000 UTC m=+0.311696108 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:04:57 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. 
Dec 2 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:05:06 localhost podman[104887]: 2025-12-02 09:05:06.452889701 +0000 UTC m=+0.093688044 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond) Dec 2 04:05:06 localhost podman[104887]: 2025-12-02 09:05:06.463105114 +0000 UTC m=+0.103903447 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1) Dec 2 04:05:06 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 04:05:06 localhost podman[104888]: 2025-12-02 09:05:06.496039054 +0000 UTC m=+0.136236480 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:05:06 localhost podman[104889]: 2025-12-02 09:05:06.564010159 +0000 UTC m=+0.198346739 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 
'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 04:05:06 localhost podman[104889]: 
2025-12-02 09:05:06.607669676 +0000 UTC m=+0.242006286 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com) Dec 2 04:05:06 localhost podman[104889]: unhealthy Dec 2 04:05:06 localhost podman[104890]: 2025-12-02 09:05:06.617766026 +0000 UTC m=+0.248189962 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
release=1761123044, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, 
com.redhat.component=openstack-ceilometer-compute-container) Dec 2 04:05:06 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:06 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 04:05:06 localhost podman[104902]: 2025-12-02 09:05:06.538807256 +0000 UTC m=+0.164889615 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, version=17.1.12) Dec 2 04:05:06 localhost podman[104890]: 2025-12-02 09:05:06.66695528 +0000 UTC m=+0.297379196 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 2 04:05:06 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 04:05:06 localhost podman[104902]: 2025-12-02 09:05:06.719379511 +0000 UTC m=+0.345461880 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com) Dec 2 04:05:06 localhost podman[104902]: unhealthy Dec 2 04:05:06 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:06 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed with result 'exit-code'. Dec 2 04:05:06 localhost podman[104888]: 2025-12-02 09:05:06.878603575 +0000 UTC m=+0.518800971 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4) Dec 2 04:05:06 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:05:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:05:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:05:09 localhost systemd[1]: tmp-crun.gxb0F5.mount: Deactivated successfully. Dec 2 04:05:09 localhost podman[105003]: 2025-12-02 09:05:09.433923182 +0000 UTC m=+0.073151445 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:05:09 localhost podman[105003]: 2025-12-02 09:05:09.452232871 +0000 UTC m=+0.091461214 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, 
maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12) Dec 2 04:05:09 localhost podman[105003]: unhealthy Dec 2 04:05:09 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:09 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:05:09 localhost podman[105004]: 2025-12-02 09:05:09.549195871 +0000 UTC m=+0.182306711 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, 
url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible) Dec 2 04:05:09 localhost podman[105004]: 2025-12-02 09:05:09.56823512 +0000 UTC m=+0.201345960 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4) Dec 2 04:05:09 localhost podman[105004]: unhealthy Dec 2 04:05:09 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:09 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:05:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:05:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 04:05:18 localhost podman[105119]: 2025-12-02 09:05:18.460467481 +0000 UTC m=+0.099176490 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-collectd, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:05:18 localhost podman[105120]: 2025-12-02 09:05:18.510472697 +0000 UTC m=+0.147892192 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, container_name=iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 04:05:18 localhost podman[105119]: 2025-12-02 09:05:18.522795486 +0000 UTC m=+0.161504455 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.12, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd) Dec 2 04:05:18 localhost podman[105120]: 2025-12-02 09:05:18.526087135 +0000 UTC m=+0.163506600 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., 
build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=iscsid, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 04:05:18 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:05:18 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:05:19 localhost sshd[105158]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:05:19 localhost systemd-logind[757]: New session 37 of user zuul. Dec 2 04:05:19 localhost systemd[1]: Started Session 37 of User zuul. 
Dec 2 04:05:20 localhost python3.9[105253]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:05:21 localhost python3.9[105347]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:05:21 localhost python3.9[105440]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:05:22 localhost python3.9[105534]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:05:23 localhost python3.9[105627]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:05:23 localhost python3.9[105718]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Dec 2 04:05:25 localhost python3.9[105808]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False 
get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:05:26 localhost python3.9[105900]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Dec 2 04:05:27 localhost python3.9[105990]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:05:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 04:05:28 localhost python3.9[106038]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:05:28 localhost podman[106039]: 2025-12-02 09:05:28.445865648 +0000 UTC m=+0.082695901 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Dec 2 04:05:28 localhost podman[106039]: 2025-12-02 09:05:28.614869853 +0000 UTC m=+0.251700106 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 
(image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, release=1761123044, architecture=x86_64) Dec 2 04:05:28 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:05:28 localhost systemd[1]: session-37.scope: Deactivated successfully. Dec 2 04:05:28 localhost systemd[1]: session-37.scope: Consumed 4.647s CPU time. Dec 2 04:05:28 localhost systemd-logind[757]: Session 37 logged out. Waiting for processes to exit. Dec 2 04:05:28 localhost systemd-logind[757]: Removed session 37. Dec 2 04:05:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23240 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=2771939589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477EF06D0000000001030307) Dec 2 04:05:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57881 DF PROTO=TCP SPT=48754 DPT=9105 SEQ=2036713384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477EF0EE0000000001030307) Dec 2 04:05:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23241 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=2771939589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477EF4640000000001030307) Dec 2 04:05:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57882 DF PROTO=TCP SPT=48754 DPT=9105 SEQ=2036713384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477EF4E40000000001030307) Dec 2 04:05:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24597 DF PROTO=TCP SPT=59492 DPT=9882 SEQ=281102290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477EF8AC0000000001030307) Dec 2 04:05:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23242 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=2771939589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477EFC650000000001030307) Dec 2 04:05:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24598 DF PROTO=TCP SPT=59492 DPT=9882 SEQ=281102290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477EFCA40000000001030307) Dec 2 04:05:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57883 DF PROTO=TCP SPT=48754 DPT=9105 SEQ=2036713384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477EFCE40000000001030307) Dec 2 04:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 04:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 04:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:05:37 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:05:37 localhost recover_tripleo_nova_virtqemud[106112]: 62312 Dec 2 04:05:37 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:05:37 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:05:37 localhost podman[106085]: 2025-12-02 09:05:37.462876031 +0000 UTC m=+0.095559704 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, 
batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, architecture=x86_64, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible) Dec 2 04:05:37 localhost podman[106085]: 2025-12-02 09:05:37.472661733 +0000 UTC m=+0.105345396 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64) Dec 2 04:05:37 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 04:05:37 localhost podman[106088]: 2025-12-02 09:05:37.523959743 +0000 UTC m=+0.149365671 container health_status 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute) Dec 2 04:05:37 localhost podman[106088]: 2025-12-02 09:05:37.552828044 +0000 UTC m=+0.178233902 container exec_died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, release=1761123044) Dec 2 04:05:37 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Deactivated successfully. 
Dec 2 04:05:37 localhost podman[106086]: 2025-12-02 09:05:37.559850462 +0000 UTC m=+0.189105303 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:05:37 localhost podman[106089]: 2025-12-02 09:05:37.622220928 +0000 UTC m=+0.246548217 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:05:37 localhost podman[106089]: 2025-12-02 09:05:37.666983004 +0000 UTC m=+0.291310263 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, 
io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z) Dec 2 04:05:37 localhost podman[106089]: unhealthy Dec 2 04:05:37 localhost systemd[1]: 
7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:37 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed with result 'exit-code'. Dec 2 04:05:37 localhost podman[106087]: 2025-12-02 09:05:37.683875066 +0000 UTC m=+0.309150320 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Dec 2 04:05:37 localhost podman[106087]: 2025-12-02 09:05:37.703974752 +0000 UTC m=+0.329249996 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step5, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute) Dec 2 04:05:37 localhost podman[106087]: unhealthy Dec 2 04:05:37 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:37 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. 
Dec 2 04:05:37 localhost podman[106086]: 2025-12-02 09:05:37.927775881 +0000 UTC m=+0.557030752 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vendor=Red Hat, Inc.) Dec 2 04:05:37 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:05:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65145 DF PROTO=TCP SPT=60756 DPT=9100 SEQ=314869193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F045B0000000001030307) Dec 2 04:05:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24599 DF PROTO=TCP SPT=59492 DPT=9882 SEQ=281102290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F04A40000000001030307) Dec 2 04:05:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65146 DF PROTO=TCP SPT=60756 DPT=9100 SEQ=314869193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F08650000000001030307) Dec 2 04:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 04:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:05:40 localhost podman[106206]: 2025-12-02 09:05:40.436730459 +0000 UTC m=+0.075428966 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1) Dec 2 04:05:40 localhost podman[106206]: 2025-12-02 09:05:40.448272297 +0000 UTC m=+0.086970784 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, 
release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public) Dec 2 04:05:40 localhost podman[106206]: unhealthy Dec 2 04:05:40 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:40 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:05:40 localhost systemd[1]: tmp-crun.H0u1XW.mount: Deactivated successfully. Dec 2 04:05:40 localhost podman[106207]: 2025-12-02 09:05:40.507449088 +0000 UTC m=+0.142456907 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 04:05:40 localhost podman[106207]: 2025-12-02 09:05:40.547317613 +0000 UTC m=+0.182325432 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 04:05:40 localhost podman[106207]: unhealthy Dec 2 04:05:40 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:05:40 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 04:05:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23243 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=2771939589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F0C240000000001030307) Dec 2 04:05:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57884 DF PROTO=TCP SPT=48754 DPT=9105 SEQ=2036713384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F0CA40000000001030307) Dec 2 04:05:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65147 DF PROTO=TCP SPT=60756 DPT=9100 SEQ=314869193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F10640000000001030307) Dec 2 04:05:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24600 DF PROTO=TCP SPT=59492 DPT=9882 SEQ=281102290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F14640000000001030307) Dec 2 04:05:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65148 DF PROTO=TCP SPT=60756 DPT=9100 SEQ=314869193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F20250000000001030307) Dec 2 04:05:48 localhost sshd[106244]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:05:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. 
Dec 2 04:05:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:05:48 localhost systemd-logind[757]: New session 38 of user zuul. Dec 2 04:05:48 localhost systemd[1]: Started Session 38 of User zuul. Dec 2 04:05:48 localhost podman[106247]: 2025-12-02 09:05:48.781113264 +0000 UTC m=+0.081519900 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 2 04:05:48 localhost podman[106247]: 2025-12-02 09:05:48.795135648 +0000 UTC m=+0.095542334 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z) Dec 2 04:05:48 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 04:05:48 localhost podman[106246]: 2025-12-02 09:05:48.89778293 +0000 UTC m=+0.200690432 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, container_name=collectd, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 2 04:05:48 localhost podman[106246]: 2025-12-02 09:05:48.908463795 +0000 UTC m=+0.211371267 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:05:48 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. 
Dec 2 04:05:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23244 DF PROTO=TCP SPT=51104 DPT=9102 SEQ=2771939589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F2BE40000000001030307) Dec 2 04:05:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57885 DF PROTO=TCP SPT=48754 DPT=9105 SEQ=2036713384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F2DE50000000001030307) Dec 2 04:05:49 localhost python3.9[106378]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:05:49 localhost systemd[1]: Reloading. Dec 2 04:05:49 localhost systemd-rc-local-generator[106399]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:05:49 localhost systemd-sysv-generator[106402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:05:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:05:51 localhost python3.9[106504]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:05:51 localhost network[106521]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:05:51 localhost network[106522]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:05:51 localhost network[106523]: It is advised to switch to 'NetworkManager' instead for network management. 
Dec 2 04:05:51 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24601 DF PROTO=TCP SPT=59492 DPT=9882 SEQ=281102290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F33E50000000001030307) Dec 2 04:05:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1123 DF PROTO=TCP SPT=56514 DPT=9101 SEQ=1969001915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F37DF0000000001030307) Dec 2 04:05:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:05:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1124 DF PROTO=TCP SPT=56514 DPT=9101 SEQ=1969001915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F3BE50000000001030307) Dec 2 04:05:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1125 DF PROTO=TCP SPT=56514 DPT=9101 SEQ=1969001915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F43E40000000001030307) Dec 2 04:05:57 localhost python3.9[106720]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:05:57 localhost network[106737]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:05:57 localhost network[106738]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:05:57 localhost network[106739]: It is advised to switch to 'NetworkManager' instead for network management. 
Dec 2 04:05:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:05:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 04:05:58 localhost podman[106797]: 2025-12-02 09:05:58.757682532 +0000 UTC m=+0.090557020 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:05:58 localhost podman[106797]: 2025-12-02 09:05:58.977885984 +0000 UTC m=+0.310760532 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:05:58 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 04:05:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1126 DF PROTO=TCP SPT=56514 DPT=9101 SEQ=1969001915 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F53A50000000001030307) Dec 2 04:06:01 localhost python3.9[106967]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:06:01 localhost systemd[1]: Reloading. Dec 2 04:06:01 localhost systemd-sysv-generator[106993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:06:01 localhost systemd-rc-local-generator[106990]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:06:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:06:02 localhost systemd[1]: Stopping ceilometer_agent_compute container... Dec 2 04:06:02 localhost systemd[1]: tmp-crun.q1jhBh.mount: Deactivated successfully. 
Dec 2 04:06:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48720 DF PROTO=TCP SPT=35558 DPT=9102 SEQ=2174524384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F659D0000000001030307) Dec 2 04:06:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37809 DF PROTO=TCP SPT=41890 DPT=9105 SEQ=2583707932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F661E0000000001030307) Dec 2 04:06:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48722 DF PROTO=TCP SPT=35558 DPT=9102 SEQ=2174524384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F71A40000000001030307) Dec 2 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 04:06:07 localhost podman[107023]: Error: container 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be is not running Dec 2 04:06:07 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Main process exited, code=exited, status=125/n/a Dec 2 04:06:07 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Failed with result 'exit-code'. Dec 2 04:06:07 localhost systemd[1]: tmp-crun.tPbIQg.mount: Deactivated successfully. 
Dec 2 04:06:07 localhost podman[107022]: 2025-12-02 09:06:07.7448696 +0000 UTC m=+0.130724604 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, summary=Red Hat 
OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public) Dec 2 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:06:07 localhost podman[107022]: 2025-12-02 09:06:07.791925827 +0000 UTC m=+0.177780811 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Dec 2 04:06:07 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. 
Dec 2 04:06:07 localhost podman[107055]: 2025-12-02 09:06:07.857757235 +0000 UTC m=+0.080269604 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, vcs-type=git, container_name=ceilometer_agent_ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 2 04:06:07 localhost podman[107054]: 2025-12-02 09:06:07.904284849 +0000 UTC m=+0.128175116 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044) Dec 2 04:06:07 localhost podman[107054]: 2025-12-02 09:06:07.951072739 +0000 UTC 
m=+0.174963026 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:06:07 localhost podman[107054]: unhealthy Dec 2 04:06:07 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:07 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 04:06:08 localhost podman[107055]: 2025-12-02 09:06:08.008211085 +0000 UTC m=+0.230723454 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi) Dec 2 04:06:08 localhost podman[107055]: unhealthy Dec 2 04:06:08 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:08 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed with result 'exit-code'. Dec 2 04:06:08 localhost podman[107104]: 2025-12-02 09:06:08.103764868 +0000 UTC m=+0.118669071 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:06:08 localhost podman[107104]: 2025-12-02 09:06:08.476026313 +0000 UTC m=+0.490930546 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:06:08 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:06:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9314 DF PROTO=TCP SPT=57020 DPT=9100 SEQ=1014919568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F7DA40000000001030307) Dec 2 04:06:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:06:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:06:10 localhost systemd[1]: tmp-crun.06cTbv.mount: Deactivated successfully. 
Dec 2 04:06:10 localhost podman[107128]: 2025-12-02 09:06:10.692357003 +0000 UTC m=+0.086526042 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-ovn-controller) Dec 2 04:06:10 localhost podman[107128]: 2025-12-02 09:06:10.736908704 +0000 UTC m=+0.131077733 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T23:34:05Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public) Dec 2 04:06:10 localhost podman[107127]: 2025-12-02 09:06:10.739937524 +0000 UTC m=+0.133831836 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 04:06:10 localhost podman[107127]: 2025-12-02 09:06:10.779026319 +0000 UTC m=+0.172920571 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, distribution-scope=public, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 2 04:06:10 localhost podman[107127]: unhealthy Dec 2 04:06:10 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:10 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:06:10 localhost podman[107128]: unhealthy Dec 2 04:06:10 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:10 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 04:06:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53815 DF PROTO=TCP SPT=50542 DPT=9882 SEQ=3057712063 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F89A50000000001030307) Dec 2 04:06:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9316 DF PROTO=TCP SPT=57020 DPT=9100 SEQ=1014919568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477F95650000000001030307) Dec 2 04:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:06:18 localhost systemd[1]: tmp-crun.lDw2dd.mount: Deactivated successfully. Dec 2 04:06:18 localhost podman[107242]: 2025-12-02 09:06:18.953049512 +0000 UTC m=+0.090717444 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 04:06:18 localhost podman[107242]: 2025-12-02 09:06:18.963451241 +0000 UTC m=+0.101119143 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 2 04:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:06:18 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:06:19 localhost podman[107259]: 2025-12-02 09:06:19.047314711 +0000 UTC m=+0.071860851 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Dec 2 04:06:19 localhost podman[107259]: 2025-12-02 09:06:19.057733419 +0000 UTC m=+0.082279549 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 2 04:06:19 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:06:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48724 DF PROTO=TCP SPT=35558 DPT=9102 SEQ=2174524384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FA1E40000000001030307) Dec 2 04:06:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56518 DF PROTO=TCP SPT=36398 DPT=9101 SEQ=1443848096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FAD100000000001030307) Dec 2 04:06:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56520 DF PROTO=TCP SPT=36398 DPT=9101 SEQ=1443848096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FB9240000000001030307) Dec 2 04:06:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:06:29 localhost podman[107280]: 2025-12-02 09:06:29.182796349 +0000 UTC m=+0.076032523 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12) Dec 2 04:06:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56521 DF PROTO=TCP SPT=36398 DPT=9101 SEQ=1443848096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FC8E40000000001030307) Dec 2 04:06:29 localhost podman[107280]: 2025-12-02 09:06:29.347436196 +0000 UTC m=+0.240672330 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, release=1761123044, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 2 04:06:29 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 04:06:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56498 DF PROTO=TCP SPT=53246 DPT=9102 SEQ=1239950182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FDACE0000000001030307) Dec 2 04:06:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32615 DF PROTO=TCP SPT=59620 DPT=9105 SEQ=2395432243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FDB4E0000000001030307) Dec 2 04:06:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56500 DF PROTO=TCP SPT=53246 DPT=9102 SEQ=1239950182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FE6E40000000001030307) Dec 2 04:06:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:06:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 04:06:37 localhost systemd[1]: tmp-crun.qNMF5o.mount: Deactivated successfully. 
Dec 2 04:06:37 localhost podman[107309]: 2025-12-02 09:06:37.922282388 +0000 UTC m=+0.060057215 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, name=rhosp17/openstack-cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 2 04:06:37 localhost podman[107309]: 2025-12-02 09:06:37.929532492 +0000 UTC m=+0.067307349 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=logrotate_crond, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, architecture=x86_64) Dec 2 04:06:37 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 04:06:37 localhost podman[107310]: Error: container 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be is not running Dec 2 04:06:37 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Main process exited, code=exited, status=125/n/a Dec 2 04:06:37 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Failed with result 'exit-code'. Dec 2 04:06:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. 
Dec 2 04:06:38 localhost podman[107340]: 2025-12-02 09:06:38.028573457 +0000 UTC m=+0.053721996 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, version=17.1.12) Dec 2 04:06:38 localhost podman[107340]: 2025-12-02 09:06:38.041995916 +0000 UTC m=+0.067144445 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, 
version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, tcib_managed=true) Dec 2 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:06:38 localhost podman[107340]: unhealthy Dec 2 04:06:38 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:38 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. 
Dec 2 04:06:38 localhost podman[107362]: 2025-12-02 09:06:38.116817025 +0000 UTC m=+0.060884828 container health_status 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 2 04:06:38 localhost podman[107362]: 2025-12-02 09:06:38.143869897 +0000 UTC m=+0.087937720 container exec_died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible) Dec 2 04:06:38 localhost podman[107362]: unhealthy Dec 2 04:06:38 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:38 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed with result 'exit-code'. Dec 2 04:06:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 04:06:39 localhost podman[107387]: 2025-12-02 09:06:39.444876055 +0000 UTC m=+0.084415726 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, container_name=nova_migration_target, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:06:39 localhost podman[107387]: 2025-12-02 09:06:39.864869766 +0000 UTC m=+0.504409427 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true) Dec 2 04:06:39 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:06:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44603 DF PROTO=TCP SPT=44326 DPT=9100 SEQ=3333704589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FF2E40000000001030307) Dec 2 04:06:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 04:06:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:06:41 localhost podman[107411]: 2025-12-02 09:06:41.439455642 +0000 UTC m=+0.078317804 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 04:06:41 localhost podman[107411]: 2025-12-02 09:06:41.450889017 +0000 UTC m=+0.089751169 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:06:41 localhost podman[107411]: unhealthy Dec 2 04:06:41 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:41 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:06:41 localhost podman[107410]: 2025-12-02 09:06:41.482723277 +0000 UTC m=+0.124634330 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) 
Dec 2 04:06:41 localhost podman[107410]: 2025-12-02 09:06:41.521102413 +0000 UTC m=+0.163013476 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 2 04:06:41 localhost podman[107410]: unhealthy Dec 2 04:06:41 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:06:41 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 04:06:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54581 DF PROTO=TCP SPT=53504 DPT=9882 SEQ=2933903461 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A477FFEE40000000001030307) Dec 2 04:06:44 localhost podman[107007]: time="2025-12-02T09:06:44Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Dec 2 04:06:44 localhost systemd[1]: tmp-crun.8vAHCu.mount: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: libpod-4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.scope: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: libpod-4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.scope: Consumed 6.232s CPU time. Dec 2 04:06:44 localhost podman[107007]: 2025-12-02 09:06:44.140287626 +0000 UTC m=+42.091105041 container stop 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 04:06:44 localhost podman[107007]: 2025-12-02 09:06:44.174057448 +0000 UTC m=+42.124874893 container died 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, 
container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044) Dec 2 04:06:44 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.timer: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be. Dec 2 04:06:44 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Failed to open /run/systemd/transient/4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: No such file or directory Dec 2 04:06:44 localhost systemd[1]: tmp-crun.Ynl3VB.mount: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:06:44 localhost podman[107007]: 2025-12-02 09:06:44.285423803 +0000 UTC m=+42.236241198 container cleanup 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:06:44 localhost podman[107007]: ceilometer_agent_compute Dec 2 04:06:44 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.timer: Failed to open /run/systemd/transient/4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.timer: No such file or directory Dec 2 04:06:44 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Failed to open /run/systemd/transient/4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: No such file or directory Dec 2 04:06:44 localhost podman[107450]: 2025-12-02 09:06:44.301470921 +0000 UTC m=+0.145784365 container cleanup 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, 
container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-compute) Dec 2 04:06:44 localhost systemd[1]: libpod-conmon-4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.scope: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.timer: Failed to open /run/systemd/transient/4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.timer: No such file or directory Dec 2 04:06:44 localhost systemd[1]: 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: Failed to open /run/systemd/transient/4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be.service: No such file or directory Dec 2 04:06:44 localhost podman[107465]: 2025-12-02 09:06:44.386658668 +0000 UTC m=+0.050025948 container cleanup 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Dec 2 04:06:44 localhost podman[107465]: ceilometer_agent_compute Dec 2 04:06:44 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Dec 2 04:06:44 localhost systemd[1]: Stopped ceilometer_agent_compute container. Dec 2 04:06:44 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.045s CPU time, no IO. Dec 2 04:06:45 localhost systemd[1]: var-lib-containers-storage-overlay-b8a416db81901f96d6fd72f5969e70208d019cecbe75cef9d1ed7630b319da67-merged.mount: Deactivated successfully. 
Dec 2 04:06:45 localhost python3.9[107568]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:06:45 localhost systemd[1]: Reloading. Dec 2 04:06:45 localhost systemd-rc-local-generator[107592]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:06:45 localhost systemd-sysv-generator[107597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:06:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:06:45 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... Dec 2 04:06:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44605 DF PROTO=TCP SPT=44326 DPT=9100 SEQ=3333704589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47800AA40000000001030307) Dec 2 04:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. 
Dec 2 04:06:49 localhost podman[107622]: 2025-12-02 09:06:49.444045097 +0000 UTC m=+0.080250364 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:06:49 localhost podman[107622]: 2025-12-02 09:06:49.455950925 +0000 UTC m=+0.092156122 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 2 04:06:49 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. 
Dec 2 04:06:49 localhost podman[107621]: 2025-12-02 09:06:49.546648469 +0000 UTC m=+0.185426965 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-collectd) Dec 2 04:06:49 localhost podman[107621]: 2025-12-02 09:06:49.555182787 +0000 UTC m=+0.193961213 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:06:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32619 DF PROTO=TCP SPT=59620 DPT=9105 SEQ=2395432243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478017E40000000001030307) Dec 2 
04:06:49 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:06:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30898 DF PROTO=TCP SPT=56690 DPT=9101 SEQ=4024337551 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478022400000000001030307) Dec 2 04:06:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30900 DF PROTO=TCP SPT=56690 DPT=9101 SEQ=4024337551 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47802E640000000001030307) Dec 2 04:06:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30901 DF PROTO=TCP SPT=56690 DPT=9101 SEQ=4024337551 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47803E250000000001030307) Dec 2 04:06:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. 
Dec 2 04:06:59 localhost podman[107659]: 2025-12-02 09:06:59.688333311 +0000 UTC m=+0.083196754 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc.) Dec 2 04:06:59 localhost podman[107659]: 2025-12-02 09:06:59.884316896 +0000 UTC m=+0.279180319 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 
'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64) Dec 2 04:06:59 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. 
Dec 2 04:07:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34102 DF PROTO=TCP SPT=47628 DPT=9102 SEQ=3160238718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47804FFE0000000001030307) Dec 2 04:07:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64345 DF PROTO=TCP SPT=59134 DPT=9105 SEQ=833223188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780507E0000000001030307) Dec 2 04:07:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34104 DF PROTO=TCP SPT=47628 DPT=9102 SEQ=3160238718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47805C240000000001030307) Dec 2 04:07:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. Dec 2 04:07:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:07:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:07:08 localhost podman[107690]: Error: container 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe is not running Dec 2 04:07:08 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Main process exited, code=exited, status=125/n/a Dec 2 04:07:08 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed with result 'exit-code'. 
Dec 2 04:07:08 localhost podman[107688]: 2025-12-02 09:07:08.455260695 +0000 UTC m=+0.092030430 container health_status 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044) Dec 2 04:07:08 localhost podman[107688]: 2025-12-02 09:07:08.461003908 +0000 UTC m=+0.097773673 container exec_died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=logrotate_crond, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, version=17.1.12) Dec 2 04:07:08 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Deactivated successfully. Dec 2 04:07:08 localhost systemd[1]: tmp-crun.T7d8XL.mount: Deactivated successfully. 
Dec 2 04:07:08 localhost podman[107689]: 2025-12-02 09:07:08.506986546 +0000 UTC m=+0.142075857 container health_status 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, container_name=nova_compute, config_id=tripleo_step5, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 04:07:08 localhost podman[107689]: 2025-12-02 09:07:08.552243355 +0000 UTC m=+0.187332696 container exec_died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat 
OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, 
build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:07:08 localhost podman[107689]: unhealthy Dec 2 04:07:08 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:08 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 04:07:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32717 DF PROTO=TCP SPT=37158 DPT=9100 SEQ=685219675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478067E40000000001030307) Dec 2 04:07:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 04:07:10 localhost podman[107741]: 2025-12-02 09:07:10.447582001 +0000 UTC m=+0.083735048 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 2 04:07:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:07:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 4776 writes, 21K keys, 4776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4776 writes, 569 syncs, 8.39 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:07:10 localhost podman[107741]: 2025-12-02 09:07:10.819229539 +0000 UTC m=+0.455382596 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 2 04:07:10 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:07:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:07:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:07:12 localhost podman[107764]: 2025-12-02 09:07:12.435728184 +0000 UTC m=+0.077149551 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:07:12 localhost podman[107764]: 2025-12-02 09:07:12.456012267 +0000 UTC m=+0.097433644 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:07:12 localhost podman[107764]: unhealthy Dec 2 04:07:12 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:12 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 04:07:12 localhost podman[107765]: 2025-12-02 09:07:12.501977804 +0000 UTC m=+0.139797766 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true) Dec 2 04:07:12 localhost podman[107765]: 2025-12-02 09:07:12.515549287 +0000 UTC m=+0.153369279 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, architecture=x86_64, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 2 04:07:12 localhost podman[107765]: unhealthy Dec 2 04:07:12 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:12 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:07:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9319 DF PROTO=TCP SPT=57020 DPT=9100 SEQ=1014919568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478073E40000000001030307) Dec 2 04:07:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:07:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 5722 writes, 25K keys, 5722 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5722 writes, 780 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:07:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32719 DF PROTO=TCP SPT=37158 DPT=9100 SEQ=685219675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47807FA40000000001030307) Dec 2 04:07:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34106 DF PROTO=TCP SPT=47628 DPT=9102 SEQ=3160238718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47808BE40000000001030307) Dec 2 04:07:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. Dec 2 04:07:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:07:19 localhost podman[107879]: 2025-12-02 09:07:19.723499072 +0000 UTC m=+0.102103570 container health_status c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Dec 2 04:07:19 localhost podman[107879]: 2025-12-02 09:07:19.739087979 +0000 UTC m=+0.117692507 container exec_died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, container_name=iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com) Dec 2 04:07:19 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Deactivated successfully. Dec 2 04:07:19 localhost systemd[1]: tmp-crun.xQkXgI.mount: Deactivated successfully. Dec 2 04:07:19 localhost podman[107878]: 2025-12-02 09:07:19.830606913 +0000 UTC m=+0.211067119 container health_status 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, architecture=x86_64, container_name=collectd, tcib_managed=true, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:07:19 localhost podman[107878]: 2025-12-02 09:07:19.844028242 +0000 UTC m=+0.224464748 container exec_died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, 
maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, 
io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, build-date=2025-11-18T22:51:28Z) Dec 2 04:07:19 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Deactivated successfully. Dec 2 04:07:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22068 DF PROTO=TCP SPT=52596 DPT=9101 SEQ=2844187226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478097700000000001030307) Dec 2 04:07:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22070 DF PROTO=TCP SPT=52596 DPT=9101 SEQ=2844187226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780A3640000000001030307) Dec 2 04:07:27 localhost podman[107608]: time="2025-12-02T09:07:27Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Dec 2 04:07:27 localhost systemd[1]: libpod-7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.scope: Deactivated successfully. Dec 2 04:07:27 localhost systemd[1]: libpod-7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.scope: Consumed 6.287s CPU time. 
Dec 2 04:07:27 localhost podman[107608]: 2025-12-02 09:07:27.762276971 +0000 UTC m=+42.095603588 container died 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true) Dec 2 04:07:27 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.timer: Deactivated successfully. Dec 2 04:07:27 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe. Dec 2 04:07:27 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed to open /run/systemd/transient/7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: No such file or directory Dec 2 04:07:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe-userdata-shm.mount: Deactivated successfully. Dec 2 04:07:27 localhost systemd[1]: var-lib-containers-storage-overlay-fddcd6dd4df186203ff55efce1dca7750680c9de7878dc7d77dfefe109af9b62-merged.mount: Deactivated successfully. 
Dec 2 04:07:27 localhost podman[107608]: 2025-12-02 09:07:27.816242173 +0000 UTC m=+42.149568770 container cleanup 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 2 04:07:27 localhost podman[107608]: ceilometer_agent_ipmi Dec 2 04:07:27 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.timer: Failed to open /run/systemd/transient/7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.timer: No such file or directory Dec 2 04:07:27 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed to open /run/systemd/transient/7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: No such file or directory Dec 2 04:07:27 localhost podman[107917]: 2025-12-02 09:07:27.850567259 +0000 UTC m=+0.081597130 container cleanup 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, release=1761123044, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 2 04:07:27 localhost systemd[1]: libpod-conmon-7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.scope: Deactivated successfully. 
Dec 2 04:07:27 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.timer: Failed to open /run/systemd/transient/7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.timer: No such file or directory Dec 2 04:07:27 localhost systemd[1]: 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: Failed to open /run/systemd/transient/7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe.service: No such file or directory Dec 2 04:07:27 localhost podman[107933]: 2025-12-02 09:07:27.931410409 +0000 UTC m=+0.056313345 container cleanup 7c38d69431417999192726acf4fb66e48576980efee171d6750a496e1575f4fe (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64) Dec 2 04:07:27 localhost podman[107933]: ceilometer_agent_ipmi Dec 2 04:07:27 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Dec 2 04:07:27 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. Dec 2 04:07:28 localhost python3.9[108036]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:28 localhost systemd[1]: Reloading. Dec 2 04:07:28 localhost systemd-rc-local-generator[108065]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:28 localhost systemd-sysv-generator[108069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:29 localhost systemd[1]: Stopping collectd container... Dec 2 04:07:29 localhost systemd[1]: tmp-crun.55fuFM.mount: Deactivated successfully. Dec 2 04:07:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22071 DF PROTO=TCP SPT=52596 DPT=9101 SEQ=2844187226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780B3250000000001030307) Dec 2 04:07:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 04:07:30 localhost podman[108091]: 2025-12-02 09:07:30.43748989 +0000 UTC m=+0.082661739 container health_status 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 2 04:07:30 localhost systemd[1]: libpod-237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.scope: Deactivated successfully. Dec 2 04:07:30 localhost systemd[1]: libpod-237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.scope: Consumed 2.179s CPU time. 
Dec 2 04:07:30 localhost podman[108091]: 2025-12-02 09:07:30.656009668 +0000 UTC m=+0.301181447 container exec_died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, 
version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:07:30 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Deactivated successfully. Dec 2 04:07:30 localhost podman[108077]: 2025-12-02 09:07:30.708448079 +0000 UTC m=+1.646039745 container died 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc.) Dec 2 04:07:30 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.timer: Deactivated successfully. Dec 2 04:07:30 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6. 
Dec 2 04:07:30 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Failed to open /run/systemd/transient/237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: No such file or directory Dec 2 04:07:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6-userdata-shm.mount: Deactivated successfully. Dec 2 04:07:30 localhost podman[108077]: 2025-12-02 09:07:30.748478869 +0000 UTC m=+1.686070535 container cleanup 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 2 04:07:30 localhost podman[108077]: collectd Dec 2 04:07:30 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.timer: Failed to open /run/systemd/transient/237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.timer: No such file or directory Dec 2 04:07:30 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Failed to open /run/systemd/transient/237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: No such file or directory Dec 2 04:07:30 localhost podman[108120]: 2025-12-02 09:07:30.769541222 +0000 UTC m=+0.104571985 container cleanup 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, url=https://www.redhat.com) Dec 2 04:07:30 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:30 localhost systemd[1]: libpod-conmon-237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.scope: Deactivated successfully. Dec 2 04:07:30 localhost podman[108149]: error opening file `/run/crun/237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6/status`: No such file or directory Dec 2 04:07:30 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.timer: Failed to open /run/systemd/transient/237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.timer: No such file or directory Dec 2 04:07:30 localhost systemd[1]: 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: Failed to open /run/systemd/transient/237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6.service: No such file or directory Dec 2 04:07:30 localhost podman[108136]: 2025-12-02 09:07:30.865360031 +0000 UTC m=+0.067929925 container cleanup 237065159bc32e1055f58656f49d027a128e2751373c9ed9dce9083e14cc3ac6 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, 
com.redhat.component=openstack-collectd-container, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 2 04:07:30 localhost podman[108136]: collectd Dec 2 04:07:30 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Dec 2 04:07:30 localhost systemd[1]: Stopped collectd container. Dec 2 04:07:31 localhost systemd[1]: var-lib-containers-storage-overlay-082042a751b48593af3e4b42b09156dbc115dd133d7891319f3ff1ad0b672b0b-merged.mount: Deactivated successfully. Dec 2 04:07:31 localhost python3.9[108242]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:31 localhost systemd[1]: Reloading. Dec 2 04:07:31 localhost systemd-rc-local-generator[108272]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:31 localhost systemd-sysv-generator[108275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:07:31 localhost systemd[1]: Stopping iscsid container... Dec 2 04:07:31 localhost recover_tripleo_nova_virtqemud[108285]: 62312 Dec 2 04:07:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Dec 2 04:07:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:07:32 localhost systemd[1]: libpod-c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.scope: Deactivated successfully. Dec 2 04:07:32 localhost systemd[1]: libpod-c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.scope: Consumed 1.145s CPU time. Dec 2 04:07:32 localhost podman[108284]: 2025-12-02 09:07:32.063785828 +0000 UTC m=+0.079036032 container died c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, distribution-scope=public, release=1761123044, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=iscsid, io.buildah.version=1.41.4) Dec 2 04:07:32 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.timer: Deactivated successfully. Dec 2 04:07:32 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52. Dec 2 04:07:32 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Failed to open /run/systemd/transient/c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: No such file or directory Dec 2 04:07:32 localhost systemd[1]: tmp-crun.gn3GKU.mount: Deactivated successfully. 
Dec 2 04:07:32 localhost podman[108284]: 2025-12-02 09:07:32.1147724 +0000 UTC m=+0.130022594 container cleanup c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 2 04:07:32 localhost podman[108284]: iscsid Dec 2 04:07:32 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.timer: Failed to open /run/systemd/transient/c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.timer: No such file or directory Dec 2 04:07:32 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Failed to open /run/systemd/transient/c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: No such file or directory Dec 2 04:07:32 localhost podman[108298]: 2025-12-02 09:07:32.193117133 +0000 UTC m=+0.118997699 container cleanup c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:07:32 localhost systemd[1]: libpod-conmon-c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.scope: Deactivated successfully. 
Dec 2 04:07:32 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.timer: Failed to open /run/systemd/transient/c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.timer: No such file or directory Dec 2 04:07:32 localhost systemd[1]: c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: Failed to open /run/systemd/transient/c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52.service: No such file or directory Dec 2 04:07:32 localhost podman[108314]: 2025-12-02 09:07:32.286840878 +0000 UTC m=+0.061906605 container cleanup c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, 
url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 2 04:07:32 localhost podman[108314]: iscsid Dec 2 04:07:32 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Dec 2 04:07:32 localhost systemd[1]: Stopped iscsid container. Dec 2 04:07:32 localhost systemd[1]: var-lib-containers-storage-overlay-eee6dae47ff617871c47add2aa57f33c2f7e68905855055afb3a7b04648ecacd-merged.mount: Deactivated successfully. Dec 2 04:07:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c88e11ade7cb42412d755bbc83b537b85df2401b0966bb2fdc0b1069a90bbe52-userdata-shm.mount: Deactivated successfully. Dec 2 04:07:32 localhost python3.9[108418]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:33 localhost systemd[1]: Reloading. 
Dec 2 04:07:33 localhost systemd-sysv-generator[108450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:33 localhost systemd-rc-local-generator[108445]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:33 localhost systemd[1]: Stopping logrotate_crond container... Dec 2 04:07:33 localhost systemd[1]: libpod-0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.scope: Deactivated successfully. Dec 2 04:07:33 localhost podman[108459]: 2025-12-02 09:07:33.463685698 +0000 UTC m=+0.077805240 container died 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, 
com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:07:33 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.timer: Deactivated successfully. Dec 2 04:07:33 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b. 
Dec 2 04:07:33 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Failed to open /run/systemd/transient/0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: No such file or directory Dec 2 04:07:33 localhost podman[108459]: 2025-12-02 09:07:33.574992071 +0000 UTC m=+0.189111583 container cleanup 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4) Dec 2 04:07:33 localhost podman[108459]: logrotate_crond Dec 2 04:07:33 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.timer: Failed to open /run/systemd/transient/0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.timer: No such file or directory Dec 2 04:07:33 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Failed to open /run/systemd/transient/0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: No such file or directory Dec 2 04:07:33 localhost podman[108472]: 2025-12-02 09:07:33.598667073 +0000 UTC m=+0.131118395 container cleanup 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) 
Dec 2 04:07:33 localhost systemd[1]: libpod-conmon-0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.scope: Deactivated successfully. Dec 2 04:07:33 localhost podman[108503]: error opening file `/run/crun/0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b/status`: No such file or directory Dec 2 04:07:33 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.timer: Failed to open /run/systemd/transient/0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.timer: No such file or directory Dec 2 04:07:33 localhost systemd[1]: 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: Failed to open /run/systemd/transient/0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b.service: No such file or directory Dec 2 04:07:33 localhost podman[108490]: 2025-12-02 09:07:33.695021137 +0000 UTC m=+0.068917971 container cleanup 0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, container_name=logrotate_crond, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z) Dec 2 04:07:33 localhost podman[108490]: logrotate_crond Dec 2 04:07:33 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Dec 2 04:07:33 localhost systemd[1]: Stopped logrotate_crond container. 
Dec 2 04:07:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29877 DF PROTO=TCP SPT=32840 DPT=9102 SEQ=798519461 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780C52E0000000001030307) Dec 2 04:07:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24303 DF PROTO=TCP SPT=46326 DPT=9105 SEQ=4168036757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780C5AF0000000001030307) Dec 2 04:07:34 localhost python3.9[108596]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:34 localhost systemd[1]: var-lib-containers-storage-overlay-d5dc9262725001f2f73a799452ce705d444359a7e34fc5a93c05c8a39696c355-merged.mount: Deactivated successfully. Dec 2 04:07:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d5eefb6ce98c7322a99b5b5705f907e8a9ccd858f2a9e4deb8757260d2c605b-userdata-shm.mount: Deactivated successfully. Dec 2 04:07:34 localhost systemd[1]: Reloading. Dec 2 04:07:34 localhost systemd-rc-local-generator[108620]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:34 localhost systemd-sysv-generator[108624]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:07:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:07:34 localhost systemd[1]: Stopping metrics_qdr container... Dec 2 04:07:34 localhost kernel: qdrouterd[54996]: segfault at 0 ip 00007fa2f77fe7cb sp 00007ffd821391b0 error 4 in libc.so.6[7fa2f779b000+175000] Dec 2 04:07:34 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Dec 2 04:07:34 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Dec 2 04:07:34 localhost systemd[1]: Started Process Core Dump (PID 108651/UID 0). Dec 2 04:07:35 localhost systemd-coredump[108652]: Resource limits disable core dumping for process 54996 (qdrouterd). Dec 2 04:07:35 localhost systemd-coredump[108652]: Process 54996 (qdrouterd) of user 42465 dumped core. Dec 2 04:07:35 localhost systemd[1]: systemd-coredump@0-108651-0.service: Deactivated successfully. Dec 2 04:07:35 localhost podman[108637]: 2025-12-02 09:07:35.040846662 +0000 UTC m=+0.239595192 container died 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 2 04:07:35 localhost systemd[1]: libpod-71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.scope: Deactivated successfully. Dec 2 04:07:35 localhost systemd[1]: libpod-71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.scope: Consumed 28.577s CPU time. Dec 2 04:07:35 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.timer: Deactivated successfully. 
Dec 2 04:07:35 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5. Dec 2 04:07:35 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Failed to open /run/systemd/transient/71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: No such file or directory Dec 2 04:07:35 localhost podman[108637]: 2025-12-02 09:07:35.087778935 +0000 UTC m=+0.286527465 container cleanup 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 2 04:07:35 localhost podman[108637]: metrics_qdr Dec 2 04:07:35 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.timer: Failed to open /run/systemd/transient/71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.timer: No such file or directory Dec 2 04:07:35 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Failed to open /run/systemd/transient/71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: No such file or directory Dec 2 04:07:35 localhost podman[108656]: 2025-12-02 09:07:35.12649905 +0000 UTC m=+0.071306575 container cleanup 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 2 04:07:35 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Dec 2 04:07:35 localhost systemd[1]: libpod-conmon-71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.scope: Deactivated successfully. Dec 2 04:07:35 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.timer: Failed to open /run/systemd/transient/71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.timer: No such file or directory Dec 2 04:07:35 localhost systemd[1]: 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: Failed to open /run/systemd/transient/71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5.service: No such file or directory Dec 2 04:07:35 localhost podman[108670]: 2025-12-02 09:07:35.234025253 +0000 UTC m=+0.073522706 container cleanup 71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '36af2f1ef63ece3c88eb676f44e9c36d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044) Dec 2 04:07:35 localhost podman[108670]: metrics_qdr Dec 2 04:07:35 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Dec 2 04:07:35 localhost systemd[1]: Stopped metrics_qdr container. Dec 2 04:07:35 localhost systemd[1]: var-lib-containers-storage-overlay-083325a356d009687825873f5ef80d42d8ec3a9c9ef25c5a97dbce5b8f99fa32-merged.mount: Deactivated successfully. 
Dec 2 04:07:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71b0df469ac1667ac6e61c84f308440912da39a2a9f53c444b9100712adc10d5-userdata-shm.mount: Deactivated successfully. Dec 2 04:07:35 localhost python3.9[108775]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:36 localhost python3.9[108868]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29879 DF PROTO=TCP SPT=32840 DPT=9102 SEQ=798519461 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780D1250000000001030307) Dec 2 04:07:37 localhost python3.9[108961]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:38 localhost python3.9[109054]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:07:38 localhost systemd[1]: Reloading. Dec 2 04:07:38 localhost systemd-rc-local-generator[109073]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:07:38 localhost systemd-sysv-generator[109077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:07:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:07:38 localhost systemd[1]: Stopping nova_compute container... Dec 2 04:07:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:07:39 localhost podman[109106]: Error: container 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 is not running Dec 2 04:07:39 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=125/n/a Dec 2 04:07:39 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 04:07:39 localhost sshd[109119]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:07:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30611 DF PROTO=TCP SPT=45150 DPT=9100 SEQ=1645113348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780DD240000000001030307) Dec 2 04:07:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 04:07:41 localhost podman[109121]: 2025-12-02 09:07:41.44076167 +0000 UTC m=+0.073866564 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:07:41 localhost podman[109121]: 2025-12-02 09:07:41.78883643 +0000 UTC m=+0.421941264 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 2 04:07:41 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:07:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47663 DF PROTO=TCP SPT=48864 DPT=9882 SEQ=1505906793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780E9240000000001030307) Dec 2 04:07:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 04:07:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:07:43 localhost podman[109145]: 2025-12-02 09:07:43.435386678 +0000 UTC m=+0.071680646 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, 
vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, release=1761123044) Dec 2 04:07:43 localhost podman[109144]: 2025-12-02 09:07:43.493000497 +0000 UTC m=+0.130253551 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:07:43 localhost podman[109144]: 2025-12-02 09:07:43.50807933 +0000 UTC m=+0.145332354 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 2 04:07:43 localhost podman[109144]: unhealthy Dec 2 04:07:43 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:43 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:07:43 localhost podman[109145]: 2025-12-02 09:07:43.525027083 +0000 UTC m=+0.161321021 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 04:07:43 localhost podman[109145]: unhealthy Dec 2 04:07:43 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:07:43 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 04:07:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30613 DF PROTO=TCP SPT=45150 DPT=9100 SEQ=1645113348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4780F4E40000000001030307) Dec 2 04:07:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24307 DF PROTO=TCP SPT=46326 DPT=9105 SEQ=4168036757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478101E40000000001030307) Dec 2 04:07:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53321 DF PROTO=TCP SPT=49934 DPT=9101 SEQ=1691256137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47810CA00000000001030307) Dec 2 04:07:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53323 DF PROTO=TCP SPT=49934 DPT=9101 SEQ=1691256137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478118A40000000001030307) Dec 2 04:07:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53324 DF PROTO=TCP SPT=49934 DPT=9101 SEQ=1691256137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478128650000000001030307) Dec 2 04:08:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26138 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=4151817394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A47813A5E0000000001030307) Dec 2 04:08:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6218 DF PROTO=TCP SPT=41124 DPT=9105 SEQ=3870507885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47813ADE0000000001030307) Dec 2 04:08:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26140 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=4151817394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478146640000000001030307) Dec 2 04:08:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:08:09 localhost podman[109184]: Error: container 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 is not running Dec 2 04:08:09 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Main process exited, code=exited, status=125/n/a Dec 2 04:08:09 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed with result 'exit-code'. Dec 2 04:08:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49490 DF PROTO=TCP SPT=41402 DPT=9882 SEQ=857368683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478151E40000000001030307) Dec 2 04:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. 
Dec 2 04:08:11 localhost podman[109196]: 2025-12-02 09:08:11.972423907 +0000 UTC m=+0.101310438 container health_status 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=) Dec 2 04:08:12 localhost podman[109196]: 2025-12-02 09:08:12.322032887 +0000 UTC m=+0.450919458 container exec_died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 2 04:08:12 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Deactivated successfully. Dec 2 04:08:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32722 DF PROTO=TCP SPT=37158 DPT=9100 SEQ=685219675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47815DE50000000001030307) Dec 2 04:08:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 04:08:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:08:14 localhost podman[109220]: 2025-12-02 09:08:14.195594941 +0000 UTC m=+0.083591335 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, 
url=https://www.redhat.com, container_name=ovn_controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4) Dec 2 04:08:14 localhost podman[109219]: 2025-12-02 09:08:14.245652708 +0000 UTC m=+0.135637935 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true) Dec 2 04:08:14 localhost podman[109219]: 2025-12-02 09:08:14.259147878 +0000 UTC m=+0.149133115 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:08:14 localhost podman[109219]: unhealthy Dec 2 04:08:14 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:08:14 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:08:14 localhost podman[109220]: 2025-12-02 09:08:14.312285198 +0000 UTC m=+0.200281592 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, 
tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 04:08:14 localhost podman[109220]: unhealthy Dec 2 04:08:14 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:08:14 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 04:08:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37133 DF PROTO=TCP SPT=39174 DPT=9100 SEQ=3855010009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47816A240000000001030307) Dec 2 04:08:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26142 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=4151817394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478175E50000000001030307) Dec 2 04:08:20 localhost podman[109094]: time="2025-12-02T09:08:20Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Dec 2 04:08:20 localhost systemd[1]: session-c11.scope: Deactivated successfully. Dec 2 04:08:20 localhost systemd[1]: libpod-1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.scope: Deactivated successfully. Dec 2 04:08:20 localhost systemd[1]: libpod-1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.scope: Consumed 35.934s CPU time. 
Dec 2 04:08:20 localhost podman[109094]: 2025-12-02 09:08:20.509898312 +0000 UTC m=+42.083247998 container died 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true) Dec 2 04:08:20 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.timer: Deactivated successfully. Dec 2 04:08:20 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1. Dec 2 04:08:20 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed to open /run/systemd/transient/1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: No such file or directory Dec 2 04:08:20 localhost systemd[1]: tmp-crun.W5saBG.mount: Deactivated successfully. 
Dec 2 04:08:20 localhost systemd[1]: var-lib-containers-storage-overlay-0399317fe788e77a051163f65a715baa05b56d1254753267f43144269e89c7fb-merged.mount: Deactivated successfully. Dec 2 04:08:20 localhost podman[109094]: 2025-12-02 09:08:20.57872597 +0000 UTC m=+42.152075646 container cleanup 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 2 04:08:20 localhost podman[109094]: nova_compute Dec 2 04:08:20 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.timer: Failed to open /run/systemd/transient/1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.timer: No such file or directory Dec 2 04:08:20 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed to open /run/systemd/transient/1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: No such file or 
directory Dec 2 04:08:20 localhost podman[109339]: 2025-12-02 09:08:20.656354464 +0000 UTC m=+0.135647675 container cleanup 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:08:20 localhost systemd[1]: libpod-conmon-1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.scope: Deactivated successfully. 
Dec 2 04:08:20 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.timer: Failed to open /run/systemd/transient/1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.timer: No such file or directory Dec 2 04:08:20 localhost systemd[1]: 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: Failed to open /run/systemd/transient/1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1.service: No such file or directory Dec 2 04:08:20 localhost podman[109354]: 2025-12-02 09:08:20.753372596 +0000 UTC m=+0.067928516 container cleanup 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute) Dec 2 04:08:20 localhost podman[109354]: nova_compute Dec 2 04:08:20 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. Dec 2 04:08:20 localhost systemd[1]: Stopped nova_compute container. 
Dec 2 04:08:20 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.043s CPU time, no IO. Dec 2 04:08:21 localhost python3.9[109458]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:08:21 localhost systemd[1]: Reloading. Dec 2 04:08:21 localhost systemd-sysv-generator[109488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:08:21 localhost systemd-rc-local-generator[109484]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:08:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:08:21 localhost systemd[1]: Stopping nova_migration_target container... Dec 2 04:08:21 localhost systemd[1]: libpod-17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.scope: Deactivated successfully. Dec 2 04:08:21 localhost systemd[1]: libpod-17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.scope: Consumed 33.739s CPU time. 
Dec 2 04:08:21 localhost podman[109499]: 2025-12-02 09:08:21.9314821 +0000 UTC m=+0.057452466 container died 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 2 04:08:21 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.timer: Deactivated successfully. Dec 2 04:08:21 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159. Dec 2 04:08:21 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Failed to open /run/systemd/transient/17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: No such file or directory Dec 2 04:08:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159-userdata-shm.mount: Deactivated successfully. Dec 2 04:08:21 localhost systemd[1]: var-lib-containers-storage-overlay-aed02a8eef27d7fad5076c16a3501516599cfd6963ae4f4d75e8f0b164242bc5-merged.mount: Deactivated successfully. 
Dec 2 04:08:21 localhost podman[109499]: 2025-12-02 09:08:21.979578485 +0000 UTC m=+0.105548801 container cleanup 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true) Dec 2 04:08:21 localhost podman[109499]: nova_migration_target Dec 2 04:08:22 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.timer: Failed to open /run/systemd/transient/17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.timer: No such file or directory Dec 2 04:08:22 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Failed to open /run/systemd/transient/17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: No such file or directory Dec 2 04:08:22 localhost podman[109513]: 2025-12-02 09:08:22.005085096 +0000 UTC m=+0.062213183 container cleanup 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4) Dec 2 04:08:22 localhost systemd[1]: libpod-conmon-17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.scope: Deactivated successfully. 
Dec 2 04:08:22 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.timer: Failed to open /run/systemd/transient/17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.timer: No such file or directory Dec 2 04:08:22 localhost systemd[1]: 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: Failed to open /run/systemd/transient/17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159.service: No such file or directory Dec 2 04:08:22 localhost podman[109525]: 2025-12-02 09:08:22.117565692 +0000 UTC m=+0.070863875 container cleanup 17e1761fa5bc0f54a6d8d8ad155d5bbe472ffac96dffb76c1739ddef9d187159 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 2 04:08:22 localhost podman[109525]: nova_migration_target Dec 2 04:08:22 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Dec 2 04:08:22 localhost systemd[1]: Stopped nova_migration_target container. Dec 2 04:08:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58533 DF PROTO=TCP SPT=35196 DPT=9101 SEQ=1587794403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478181CF0000000001030307) Dec 2 04:08:24 localhost python3.9[109629]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:08:24 localhost systemd[1]: Reloading. 
Dec 2 04:08:24 localhost systemd-rc-local-generator[109653]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:08:24 localhost systemd-sysv-generator[109657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:08:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:08:24 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Dec 2 04:08:24 localhost systemd[1]: libpod-6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56.scope: Deactivated successfully. Dec 2 04:08:24 localhost podman[109670]: 2025-12-02 09:08:24.794556238 +0000 UTC m=+0.066221990 container died 6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, 
name=rhosp17/openstack-nova-libvirt, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}) Dec 2 04:08:24 localhost podman[109670]: 2025-12-02 09:08:24.83955501 +0000 UTC m=+0.111220762 container cleanup 6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, container_name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:08:24 localhost podman[109670]: nova_virtlogd_wrapper Dec 2 04:08:24 localhost podman[109685]: 2025-12-02 09:08:24.87810548 +0000 UTC m=+0.068142341 container cleanup 6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, 
description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtlogd_wrapper, vcs-type=git, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 2 04:08:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58535 DF PROTO=TCP SPT=35196 DPT=9101 SEQ=1587794403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47818DE50000000001030307) Dec 2 04:08:25 localhost systemd[1]: var-lib-containers-storage-overlay-adc9ccf45b0c7149995a619e9f57f17685eac5ade5b4374b2581744148a02996-merged.mount: Deactivated successfully. Dec 2 04:08:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:08:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58536 DF PROTO=TCP SPT=35196 DPT=9101 SEQ=1587794403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47819DA50000000001030307) Dec 2 04:08:30 localhost systemd[1]: Stopping User Manager for UID 0... Dec 2 04:08:30 localhost systemd[84191]: Activating special unit Exit the Session... Dec 2 04:08:30 localhost systemd[84191]: Removed slice User Background Tasks Slice. Dec 2 04:08:30 localhost systemd[84191]: Stopped target Main User Target. Dec 2 04:08:30 localhost systemd[84191]: Stopped target Basic System. Dec 2 04:08:30 localhost systemd[84191]: Stopped target Paths. Dec 2 04:08:30 localhost systemd[84191]: Stopped target Sockets. Dec 2 04:08:30 localhost systemd[84191]: Stopped target Timers. Dec 2 04:08:30 localhost systemd[84191]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 04:08:30 localhost systemd[84191]: Closed D-Bus User Message Bus Socket. Dec 2 04:08:30 localhost systemd[84191]: Stopped Create User's Volatile Files and Directories. Dec 2 04:08:30 localhost systemd[84191]: Removed slice User Application Slice. Dec 2 04:08:30 localhost systemd[84191]: Reached target Shutdown. Dec 2 04:08:30 localhost systemd[84191]: Finished Exit the Session. Dec 2 04:08:30 localhost systemd[84191]: Reached target Exit the Session. Dec 2 04:08:30 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 2 04:08:30 localhost systemd[1]: Stopped User Manager for UID 0. Dec 2 04:08:30 localhost systemd[1]: user@0.service: Consumed 4.288s CPU time, no IO. Dec 2 04:08:30 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 2 04:08:30 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 2 04:08:30 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. 
Dec 2 04:08:30 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 2 04:08:30 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 2 04:08:30 localhost systemd[1]: user-0.slice: Consumed 5.223s CPU time. Dec 2 04:08:33 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:08:33 localhost recover_tripleo_nova_virtqemud[109703]: 62312 Dec 2 04:08:33 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:08:33 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:08:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40307 DF PROTO=TCP SPT=39264 DPT=9102 SEQ=1737105334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4781AF8D0000000001030307) Dec 2 04:08:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57399 DF PROTO=TCP SPT=51472 DPT=9105 SEQ=2724178224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4781B00E0000000001030307) Dec 2 04:08:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40309 DF PROTO=TCP SPT=39264 DPT=9102 SEQ=1737105334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4781BBA40000000001030307) Dec 2 04:08:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39696 DF PROTO=TCP SPT=54392 DPT=9100 SEQ=3246007098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4781C7A50000000001030307) Dec 2 04:08:43 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26967 DF PROTO=TCP SPT=54930 DPT=9882 SEQ=2162550144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4781D3A40000000001030307) Dec 2 04:08:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:08:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:08:44 localhost podman[109704]: 2025-12-02 09:08:44.450344716 +0000 UTC m=+0.087252583 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 2 04:08:44 localhost podman[109705]: 2025-12-02 09:08:44.493813726 +0000 UTC m=+0.130548439 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 04:08:44 localhost podman[109705]: 2025-12-02 09:08:44.512445895 +0000 UTC m=+0.149180628 container exec_died 
e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, release=1761123044, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, container_name=ovn_controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 2 04:08:44 localhost podman[109705]: unhealthy Dec 2 04:08:44 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:08:44 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:08:44 localhost podman[109704]: 2025-12-02 09:08:44.568950264 +0000 UTC m=+0.205858201 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, tcib_managed=true) Dec 2 04:08:44 localhost podman[109704]: unhealthy Dec 2 04:08:44 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:08:44 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 04:08:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39698 DF PROTO=TCP SPT=54392 DPT=9100 SEQ=3246007098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4781DF640000000001030307) Dec 2 04:08:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40311 DF PROTO=TCP SPT=39264 DPT=9102 SEQ=1737105334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4781EBE50000000001030307) Dec 2 04:08:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22197 DF PROTO=TCP SPT=37708 DPT=9101 SEQ=1092242824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4781F6FF0000000001030307) Dec 2 04:08:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22199 DF PROTO=TCP SPT=37708 DPT=9101 SEQ=1092242824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478203250000000001030307) Dec 2 04:08:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22200 DF PROTO=TCP SPT=37708 DPT=9101 SEQ=1092242824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478212E40000000001030307) Dec 2 04:09:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=361 DF PROTO=TCP SPT=60578 DPT=9102 SEQ=62058449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A478224BE0000000001030307) Dec 2 04:09:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4819 DF PROTO=TCP SPT=51496 DPT=9105 SEQ=1706879213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782253E0000000001030307) Dec 2 04:09:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=363 DF PROTO=TCP SPT=60578 DPT=9102 SEQ=62058449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478230E40000000001030307) Dec 2 04:09:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53129 DF PROTO=TCP SPT=32860 DPT=9100 SEQ=562697888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47823CA40000000001030307) Dec 2 04:09:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4989 DF PROTO=TCP SPT=53174 DPT=9882 SEQ=3182283504 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478248E40000000001030307) Dec 2 04:09:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:09:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:09:14 localhost podman[109746]: 2025-12-02 09:09:14.679873978 +0000 UTC m=+0.071229844 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=ovn_metadata_agent) Dec 2 04:09:14 localhost podman[109746]: 2025-12-02 09:09:14.693750978 +0000 UTC m=+0.085106844 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:09:14 localhost podman[109746]: unhealthy Dec 2 04:09:14 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:09:14 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:09:14 localhost systemd[1]: tmp-crun.QdacN5.mount: Deactivated successfully. Dec 2 04:09:14 localhost podman[109747]: 2025-12-02 09:09:14.734558268 +0000 UTC m=+0.123093659 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team) Dec 2 04:09:14 localhost podman[109747]: 2025-12-02 09:09:14.747879725 +0000 UTC m=+0.136415096 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 04:09:14 localhost podman[109747]: unhealthy Dec 2 04:09:14 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:09:14 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. 
Dec 2 04:09:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53131 DF PROTO=TCP SPT=32860 DPT=9100 SEQ=562697888 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478254650000000001030307) Dec 2 04:09:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4823 DF PROTO=TCP SPT=51496 DPT=9105 SEQ=1706879213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478261E40000000001030307) Dec 2 04:09:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59482 DF PROTO=TCP SPT=45232 DPT=9101 SEQ=1911152021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47826C310000000001030307) Dec 2 04:09:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59484 DF PROTO=TCP SPT=45232 DPT=9101 SEQ=1911152021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478278250000000001030307) Dec 2 04:09:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59485 DF PROTO=TCP SPT=45232 DPT=9101 SEQ=1911152021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478287E40000000001030307) Dec 2 04:09:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8727 DF PROTO=TCP SPT=49944 DPT=9102 SEQ=934880506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A478299ED0000000001030307) Dec 2 04:09:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6888 DF PROTO=TCP SPT=48004 DPT=9105 SEQ=811392733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47829A6F0000000001030307) Dec 2 04:09:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8729 DF PROTO=TCP SPT=49944 DPT=9102 SEQ=934880506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782A5E40000000001030307) Dec 2 04:09:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26970 DF PROTO=TCP SPT=54930 DPT=9882 SEQ=2162550144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782B1E40000000001030307) Dec 2 04:09:42 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 2 04:09:42 localhost recover_tripleo_nova_virtqemud[109914]: 62312 Dec 2 04:09:42 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 2 04:09:42 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 2 04:09:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39701 DF PROTO=TCP SPT=54392 DPT=9100 SEQ=3246007098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782BDE40000000001030307) Dec 2 04:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 04:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:09:44 localhost podman[109916]: 2025-12-02 09:09:44.955809938 +0000 UTC m=+0.089959414 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller) Dec 2 04:09:44 localhost podman[109916]: 2025-12-02 09:09:44.973946392 +0000 UTC m=+0.108095868 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, vcs-type=git, config_id=tripleo_step4, version=17.1.12) Dec 2 04:09:44 localhost podman[109916]: unhealthy Dec 2 04:09:44 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:09:44 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:09:45 localhost podman[109915]: 2025-12-02 09:09:45.061711508 +0000 UTC m=+0.197258132 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 2 04:09:45 localhost podman[109915]: 2025-12-02 09:09:45.081092376 +0000 UTC 
m=+0.216638970 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git) Dec 2 04:09:45 localhost podman[109915]: unhealthy Dec 2 04:09:45 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:09:45 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:09:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11686 DF PROTO=TCP SPT=43486 DPT=9100 SEQ=2281320750 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782C9A50000000001030307) Dec 2 04:09:49 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Dec 2 04:09:49 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61538 (conmon) with signal SIGKILL. 
Dec 2 04:09:49 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Dec 2 04:09:49 localhost systemd[1]: libpod-conmon-6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56.scope: Deactivated successfully. Dec 2 04:09:49 localhost podman[109967]: error opening file `/run/crun/6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56/status`: No such file or directory Dec 2 04:09:49 localhost podman[109955]: 2025-12-02 09:09:49.18294986 +0000 UTC m=+0.069988341 container cleanup 6e39cd661b9121bee9b5acf067e939fc9033a15f3a6537b3e8d8126e59e2dc56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, release=1761123044, container_name=nova_virtlogd_wrapper, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 2 04:09:49 localhost podman[109955]: nova_virtlogd_wrapper Dec 2 04:09:49 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. 
Dec 2 04:09:49 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. Dec 2 04:09:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6892 DF PROTO=TCP SPT=48004 DPT=9105 SEQ=811392733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782D5E40000000001030307) Dec 2 04:09:49 localhost python3.9[110060]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:50 localhost systemd[1]: Reloading. Dec 2 04:09:50 localhost systemd-rc-local-generator[110090]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:50 localhost systemd-sysv-generator[110094]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:50 localhost systemd[1]: Stopping nova_virtnodedevd container... Dec 2 04:09:50 localhost systemd[1]: libpod-21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8.scope: Deactivated successfully. Dec 2 04:09:50 localhost systemd[1]: libpod-21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8.scope: Consumed 1.512s CPU time. 
Dec 2 04:09:50 localhost podman[110102]: 2025-12-02 09:09:50.507445715 +0000 UTC m=+0.086460341 container died 21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, distribution-scope=public, container_name=nova_virtnodedevd, url=https://www.redhat.com, config_id=tripleo_step3) Dec 2 04:09:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:09:50 localhost podman[110102]: 2025-12-02 09:09:50.544552776 +0000 UTC m=+0.123567352 container cleanup 21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtnodedevd, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, config_id=tripleo_step3, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 2 04:09:50 localhost podman[110102]: nova_virtnodedevd Dec 2 04:09:50 localhost podman[110116]: 2025-12-02 09:09:50.598425455 +0000 UTC m=+0.069573790 container cleanup 21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, 
com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, tcib_managed=true, container_name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z) Dec 2 04:09:50 localhost systemd[1]: libpod-conmon-21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8.scope: Deactivated successfully. Dec 2 04:09:50 localhost podman[110145]: error opening file `/run/crun/21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8/status`: No such file or directory Dec 2 04:09:50 localhost podman[110132]: 2025-12-02 09:09:50.694740609 +0000 UTC m=+0.066563440 container cleanup 21ba66c04209f456290730a004123ba7623872bc65a5bce1c6488aa3b8e487e8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=nova_virtnodedevd, build-date=2025-11-19T00:35:22Z, architecture=x86_64, batch=17.1_20251118.1, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:09:50 localhost podman[110132]: nova_virtnodedevd Dec 2 04:09:50 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Dec 2 04:09:50 localhost systemd[1]: Stopped nova_virtnodedevd container. Dec 2 04:09:51 localhost python3.9[110238]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:51 localhost systemd[1]: var-lib-containers-storage-overlay-28a9a64287106d93235dbfecf490680361e8b3523afa4b3bf3ce2b25f0636261-merged.mount: Deactivated successfully. Dec 2 04:09:51 localhost systemd[1]: Reloading. Dec 2 04:09:51 localhost systemd-rc-local-generator[110263]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:51 localhost systemd-sysv-generator[110271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:51 localhost systemd[1]: Stopping nova_virtproxyd container... Dec 2 04:09:51 localhost systemd[1]: libpod-16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3.scope: Deactivated successfully. 
Dec 2 04:09:51 localhost podman[110279]: 2025-12-02 09:09:51.924698808 +0000 UTC m=+0.082719682 container died 16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, container_name=nova_virtproxyd, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, tcib_managed=true) Dec 2 04:09:51 localhost podman[110279]: 2025-12-02 09:09:51.966823893 +0000 UTC m=+0.124844717 container cleanup 16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.expose-services=, container_name=nova_virtproxyd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 2 04:09:51 localhost podman[110279]: nova_virtproxyd Dec 2 04:09:52 localhost podman[110293]: 2025-12-02 09:09:52.00940361 +0000 UTC m=+0.069713963 container cleanup 16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=nova_virtproxyd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 2 04:09:52 localhost systemd[1]: libpod-conmon-16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3.scope: Deactivated successfully. Dec 2 04:09:52 localhost podman[110320]: error opening file `/run/crun/16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3/status`: No such file or directory Dec 2 04:09:52 localhost podman[110309]: 2025-12-02 09:09:52.116558643 +0000 UTC m=+0.076889485 container cleanup 16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': 
['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12) Dec 2 04:09:52 localhost podman[110309]: nova_virtproxyd Dec 2 04:09:52 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Dec 2 04:09:52 localhost systemd[1]: Stopped nova_virtproxyd container. 
Dec 2 04:09:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52767 DF PROTO=TCP SPT=58890 DPT=9101 SEQ=809638464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782E1600000000001030307) Dec 2 04:09:52 localhost systemd[1]: var-lib-containers-storage-overlay-8867bb5d598ed9b36fb1a635e0a2434418503870d99edcec1a2221aec233d699-merged.mount: Deactivated successfully. Dec 2 04:09:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16d84a4ce4ba5ecc4f3c44b1a277cf4f1758563f4ced80b2826a6e0f692865d3-userdata-shm.mount: Deactivated successfully. Dec 2 04:09:52 localhost python3.9[110413]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:09:52 localhost systemd[1]: Reloading. Dec 2 04:09:52 localhost systemd-sysv-generator[110443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:09:52 localhost systemd-rc-local-generator[110440]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:09:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:09:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Dec 2 04:09:53 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Dec 2 04:09:53 localhost systemd[1]: Stopping nova_virtqemud container... 
Dec 2 04:09:53 localhost systemd[1]: libpod-df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca.scope: Deactivated successfully. Dec 2 04:09:53 localhost systemd[1]: libpod-df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca.scope: Consumed 2.651s CPU time. Dec 2 04:09:53 localhost podman[110454]: 2025-12-02 09:09:53.253653251 +0000 UTC m=+0.078713284 container died df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 2 04:09:53 localhost podman[110454]: 2025-12-02 09:09:53.284501035 +0000 UTC m=+0.109561058 container cleanup df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true) Dec 2 04:09:53 localhost podman[110454]: nova_virtqemud Dec 2 04:09:53 localhost podman[110467]: 2025-12-02 09:09:53.343954683 +0000 UTC m=+0.073858723 container cleanup df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, vcs-type=git, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, 
release=1761123044, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Dec 2 04:09:53 localhost systemd[1]: var-lib-containers-storage-overlay-52524ff35057981b78caabbdad0990997b49d052172da58f45f8887febb8205f-merged.mount: Deactivated successfully. Dec 2 04:09:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca-userdata-shm.mount: Deactivated successfully. Dec 2 04:09:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52769 DF PROTO=TCP SPT=58890 DPT=9101 SEQ=809638464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782ED640000000001030307) Dec 2 04:09:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52770 DF PROTO=TCP SPT=58890 DPT=9101 SEQ=809638464 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4782FD240000000001030307) Dec 2 04:10:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24987 DF PROTO=TCP SPT=40144 DPT=9102 SEQ=2502057526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47830F1D0000000001030307) Dec 2 04:10:04 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64488 DF PROTO=TCP SPT=48226 DPT=9105 SEQ=2956584921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47830F9F0000000001030307) Dec 2 04:10:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24989 DF PROTO=TCP SPT=40144 DPT=9102 SEQ=2502057526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47831B240000000001030307) Dec 2 04:10:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37706 DF PROTO=TCP SPT=59590 DPT=9100 SEQ=2039090101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478327240000000001030307) Dec 2 04:10:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56110 DF PROTO=TCP SPT=46236 DPT=9882 SEQ=3997679653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478333240000000001030307) Dec 2 04:10:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:10:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:10:15 localhost systemd[1]: tmp-crun.ZaIpZY.mount: Deactivated successfully. 
Dec 2 04:10:15 localhost podman[110484]: 2025-12-02 09:10:15.454862999 +0000 UTC m=+0.089568844 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1) Dec 2 04:10:15 localhost podman[110484]: 2025-12-02 09:10:15.496892931 +0000 UTC m=+0.131598746 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4) Dec 2 04:10:15 localhost podman[110484]: unhealthy Dec 2 04:10:15 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:10:15 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:10:15 localhost podman[110483]: 2025-12-02 09:10:15.499779059 +0000 UTC m=+0.136301352 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, 
io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com) Dec 2 04:10:15 localhost podman[110483]: 2025-12-02 09:10:15.584523822 +0000 UTC m=+0.221046185 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, release=1761123044, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Dec 2 04:10:15 localhost podman[110483]: unhealthy Dec 2 04:10:15 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:10:15 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 04:10:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37708 DF PROTO=TCP SPT=59590 DPT=9100 SEQ=2039090101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47833EE40000000001030307) Dec 2 04:10:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24991 DF PROTO=TCP SPT=40144 DPT=9102 SEQ=2502057526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47834BE40000000001030307) Dec 2 04:10:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28309 DF PROTO=TCP SPT=38506 DPT=9101 SEQ=3649466983 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478356900000000001030307) Dec 2 04:10:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28311 DF PROTO=TCP SPT=38506 DPT=9101 SEQ=3649466983 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478362A40000000001030307) Dec 2 04:10:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28312 DF PROTO=TCP SPT=38506 DPT=9101 SEQ=3649466983 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478372640000000001030307) Dec 2 04:10:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63656 DF PROTO=TCP SPT=55352 DPT=9102 SEQ=3766022190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A4783844E0000000001030307) Dec 2 04:10:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51743 DF PROTO=TCP SPT=36780 DPT=9105 SEQ=2074142622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478384CF0000000001030307) Dec 2 04:10:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63658 DF PROTO=TCP SPT=55352 DPT=9102 SEQ=3766022190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478390640000000001030307) Dec 2 04:10:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50437 DF PROTO=TCP SPT=35628 DPT=9882 SEQ=1603742372 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47839BE50000000001030307) Dec 2 04:10:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11689 DF PROTO=TCP SPT=43486 DPT=9100 SEQ=2281320750 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4783A7E40000000001030307) Dec 2 04:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. 
Dec 2 04:10:45 localhost podman[110599]: 2025-12-02 09:10:45.670729845 +0000 UTC m=+0.063001333 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 2 04:10:45 localhost podman[110599]: 2025-12-02 09:10:45.686033855 +0000 UTC m=+0.078305393 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, name=rhosp17/openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller) Dec 2 04:10:45 localhost podman[110599]: unhealthy Dec 2 04:10:45 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:10:45 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:10:45 localhost systemd[1]: tmp-crun.3yNkRa.mount: Deactivated successfully. Dec 2 04:10:45 localhost podman[110600]: 2025-12-02 09:10:45.739565085 +0000 UTC m=+0.122247607 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, release=1761123044, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 04:10:45 localhost podman[110600]: 2025-12-02 09:10:45.753752464 +0000 UTC m=+0.136434986 
container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:10:45 localhost podman[110600]: unhealthy Dec 2 04:10:45 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:10:45 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. 
Dec 2 04:10:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19990 DF PROTO=TCP SPT=36268 DPT=9100 SEQ=3508387011 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4783B3E40000000001030307) Dec 2 04:10:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63660 DF PROTO=TCP SPT=55352 DPT=9102 SEQ=3766022190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4783BFE50000000001030307) Dec 2 04:10:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18150 DF PROTO=TCP SPT=45882 DPT=9101 SEQ=699891340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4783CBC00000000001030307) Dec 2 04:10:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18152 DF PROTO=TCP SPT=45882 DPT=9101 SEQ=699891340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4783D7E50000000001030307) Dec 2 04:10:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18153 DF PROTO=TCP SPT=45882 DPT=9101 SEQ=699891340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4783E7A50000000001030307) Dec 2 04:11:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55004 DF PROTO=TCP SPT=41188 DPT=9102 SEQ=4215512189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A4783F97E0000000001030307) Dec 2 04:11:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17290 DF PROTO=TCP SPT=32972 DPT=9105 SEQ=3338811856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4783F9FF0000000001030307) Dec 2 04:11:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55006 DF PROTO=TCP SPT=41188 DPT=9102 SEQ=4215512189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478405A40000000001030307) Dec 2 04:11:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64817 DF PROTO=TCP SPT=57784 DPT=9100 SEQ=2509961656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478411640000000001030307) Dec 2 04:11:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2012 DF PROTO=TCP SPT=35046 DPT=9882 SEQ=851284200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47841DA40000000001030307) Dec 2 04:11:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:11:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. 
Dec 2 04:11:15 localhost podman[110640]: 2025-12-02 09:11:15.90849522 +0000 UTC m=+0.053068513 container health_status e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, 
com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64) Dec 2 04:11:15 localhost podman[110639]: 2025-12-02 09:11:15.95841554 +0000 UTC m=+0.105022427 container health_status 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:11:15 localhost podman[110639]: 2025-12-02 09:11:15.968360352 +0000 UTC m=+0.114967229 container exec_died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ovn_metadata_agent, tcib_managed=true, build-date=2025-11-19T00:14:25Z, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:11:15 localhost podman[110639]: unhealthy Dec 2 04:11:15 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:11:15 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed with result 'exit-code'. Dec 2 04:11:15 localhost podman[110640]: 2025-12-02 09:11:15.989290946 +0000 UTC m=+0.133864269 container exec_died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Dec 2 04:11:15 localhost podman[110640]: unhealthy Dec 2 04:11:16 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:11:16 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed with result 'exit-code'. Dec 2 04:11:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64819 DF PROTO=TCP SPT=57784 DPT=9100 SEQ=2509961656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478429250000000001030307) Dec 2 04:11:17 localhost systemd[1]: tripleo_nova_virtqemud.service: State 'stop-sigterm' timed out. Killing. Dec 2 04:11:17 localhost systemd[1]: tripleo_nova_virtqemud.service: Killing process 62308 (conmon) with signal SIGKILL. 
Dec 2 04:11:17 localhost systemd[1]: tripleo_nova_virtqemud.service: Main process exited, code=killed, status=9/KILL Dec 2 04:11:17 localhost systemd[1]: libpod-conmon-df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca.scope: Deactivated successfully. Dec 2 04:11:17 localhost podman[110692]: error opening file `/run/crun/df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca/status`: No such file or directory Dec 2 04:11:17 localhost podman[110680]: 2025-12-02 09:11:17.440986877 +0000 UTC m=+0.076410601 container cleanup df9293835a969db4ed1da89ed49d9744a7c000abca2cb7f5672a9e3b2e5a79ca (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtqemud, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 2 04:11:17 localhost podman[110680]: nova_virtqemud Dec 2 04:11:17 localhost systemd[1]: tmp-crun.u6LS8a.mount: Deactivated successfully. 
Dec 2 04:11:17 localhost systemd[1]: tripleo_nova_virtqemud.service: Failed with result 'timeout'. Dec 2 04:11:17 localhost systemd[1]: Stopped nova_virtqemud container. Dec 2 04:11:18 localhost python3.9[110785]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:11:18 localhost systemd[1]: Reloading. Dec 2 04:11:18 localhost systemd-rc-local-generator[110812]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:11:18 localhost systemd-sysv-generator[110818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:11:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:11:19 localhost python3.9[110915]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:11:19 localhost systemd[1]: Reloading. Dec 2 04:11:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17294 DF PROTO=TCP SPT=32972 DPT=9105 SEQ=3338811856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478435E40000000001030307) Dec 2 04:11:19 localhost systemd-rc-local-generator[110941]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:11:19 localhost systemd-sysv-generator[110944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:11:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:11:19 localhost systemd[1]: Stopping nova_virtsecretd container... Dec 2 04:11:19 localhost systemd[1]: libpod-d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f.scope: Deactivated successfully. Dec 2 04:11:19 localhost podman[110956]: 2025-12-02 09:11:19.751406636 +0000 UTC m=+0.081567987 container died d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtsecretd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true) Dec 2 04:11:19 localhost systemd[1]: 
var-lib-containers-storage-overlay\x2dcontainers-d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f-userdata-shm.mount: Deactivated successfully. Dec 2 04:11:19 localhost podman[110956]: 2025-12-02 09:11:19.78790766 +0000 UTC m=+0.118068991 container cleanup d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtsecretd, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 2 04:11:19 localhost podman[110956]: nova_virtsecretd Dec 2 04:11:19 localhost podman[110969]: 2025-12-02 09:11:19.846084048 +0000 UTC m=+0.082997825 container cleanup d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, 
distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.) Dec 2 04:11:19 localhost systemd[1]: libpod-conmon-d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f.scope: Deactivated successfully. Dec 2 04:11:19 localhost podman[111002]: error opening file `/run/crun/d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f/status`: No such file or directory Dec 2 04:11:19 localhost podman[110990]: 2025-12-02 09:11:19.957795671 +0000 UTC m=+0.073000821 container cleanup d03ee59c7a667467d7894db6377f1c0920833c450a13535ae78eaa182412468f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, container_name=nova_virtsecretd, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 2 04:11:19 localhost podman[110990]: nova_virtsecretd Dec 2 04:11:19 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Dec 2 04:11:19 localhost systemd[1]: Stopped nova_virtsecretd container. Dec 2 04:11:20 localhost python3.9[111095]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:11:20 localhost systemd[1]: Reloading. Dec 2 04:11:20 localhost systemd-rc-local-generator[111124]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:11:20 localhost systemd-sysv-generator[111127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:11:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:11:20 localhost systemd[1]: var-lib-containers-storage-overlay-cf7a5c1891d67f42ad2a4e32b105c4405edd8c48c080a09b863da0e9425a915a-merged.mount: Deactivated successfully. Dec 2 04:11:20 localhost systemd[1]: Stopping nova_virtstoraged container... 
Dec 2 04:11:21 localhost systemd[1]: libpod-4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56.scope: Deactivated successfully. Dec 2 04:11:21 localhost podman[111135]: 2025-12-02 09:11:21.0837074 +0000 UTC m=+0.074240383 container died 4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtstoraged, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., release=1761123044) Dec 2 04:11:21 localhost podman[111135]: 2025-12-02 09:11:21.120907584 +0000 UTC m=+0.111440587 container cleanup 4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_virtstoraged, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container) Dec 2 04:11:21 localhost podman[111135]: nova_virtstoraged Dec 2 04:11:21 localhost podman[111150]: 2025-12-02 09:11:21.160654245 +0000 UTC m=+0.066131750 container cleanup 4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, 
io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, release=1761123044) Dec 2 04:11:21 localhost systemd[1]: libpod-conmon-4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56.scope: Deactivated successfully. Dec 2 04:11:21 localhost podman[111178]: error opening file `/run/crun/4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56/status`: No such file or directory Dec 2 04:11:21 localhost podman[111166]: 2025-12-02 09:11:21.259750154 +0000 UTC m=+0.069253871 container cleanup 4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ff8ff724cb5f0d02131158e2fae849b6'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., container_name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 2 04:11:21 localhost podman[111166]: nova_virtstoraged Dec 2 04:11:21 localhost systemd[1]: tripleo_nova_virtstoraged.service: 
Deactivated successfully. Dec 2 04:11:21 localhost systemd[1]: Stopped nova_virtstoraged container. Dec 2 04:11:21 localhost systemd[1]: var-lib-containers-storage-overlay-236dcb696b3f9eafb040a89b33b195e1413d367dc5825ca1d6228a54c8e9179b-merged.mount: Deactivated successfully. Dec 2 04:11:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a8634e5907beb37e61c16da6d777a6ee20926770ebd3c1322ad3f3d55924e56-userdata-shm.mount: Deactivated successfully. Dec 2 04:11:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12363 DF PROTO=TCP SPT=42942 DPT=9101 SEQ=4269477184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478440F00000000001030307) Dec 2 04:11:22 localhost python3.9[111271]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:11:22 localhost systemd[1]: Reloading. Dec 2 04:11:22 localhost systemd-rc-local-generator[111300]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:11:22 localhost systemd-sysv-generator[111304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:11:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:11:23 localhost systemd[1]: Stopping ovn_controller container... Dec 2 04:11:23 localhost systemd[1]: libpod-e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.scope: Deactivated successfully. 
Dec 2 04:11:23 localhost systemd[1]: libpod-e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.scope: Consumed 2.635s CPU time. Dec 2 04:11:23 localhost podman[111313]: 2025-12-02 09:11:23.189637363 +0000 UTC m=+0.105378325 container died e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 2 04:11:23 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.timer: Deactivated successfully. Dec 2 04:11:23 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b. Dec 2 04:11:23 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed to open /run/systemd/transient/e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: No such file or directory Dec 2 04:11:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b-userdata-shm.mount: Deactivated successfully. 
Dec 2 04:11:23 localhost podman[111313]: 2025-12-02 09:11:23.288838336 +0000 UTC m=+0.204579288 container cleanup e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 2 04:11:23 localhost podman[111313]: ovn_controller Dec 2 04:11:23 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.timer: Failed to open /run/systemd/transient/e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.timer: No such file or directory Dec 2 04:11:23 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed to open /run/systemd/transient/e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: No such file or directory Dec 2 04:11:23 localhost podman[111327]: 2025-12-02 09:11:23.302703813 +0000 UTC m=+0.108777357 container cleanup e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, container_name=ovn_controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 2 04:11:23 localhost systemd[1]: libpod-conmon-e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.scope: Deactivated successfully. 
Dec 2 04:11:23 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.timer: Failed to open /run/systemd/transient/e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.timer: No such file or directory Dec 2 04:11:23 localhost systemd[1]: e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: Failed to open /run/systemd/transient/e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b.service: No such file or directory Dec 2 04:11:23 localhost podman[111341]: 2025-12-02 09:11:23.394002335 +0000 UTC m=+0.066035476 container cleanup e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, container_name=ovn_controller, vcs-type=git, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ovn-controller, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:11:23 localhost podman[111341]: ovn_controller Dec 2 04:11:23 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Dec 2 04:11:23 localhost systemd[1]: Stopped ovn_controller container. Dec 2 04:11:24 localhost python3.9[111473]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:11:24 localhost systemd[1]: var-lib-containers-storage-overlay-fa2735d70b4229c33d88157dc663cc996128839f7744195fee819ab923e68e6b-merged.mount: Deactivated successfully. Dec 2 04:11:24 localhost systemd[1]: Reloading. Dec 2 04:11:24 localhost systemd-sysv-generator[111529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:11:24 localhost systemd-rc-local-generator[111526]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:11:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:11:24 localhost systemd[1]: Stopping ovn_metadata_agent container... Dec 2 04:11:24 localhost systemd[1]: tmp-crun.LQ3zQe.mount: Deactivated successfully. Dec 2 04:11:24 localhost podman[111594]: 2025-12-02 09:11:24.729156766 +0000 UTC m=+0.097724034 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, name=rhceph, release=1763362218, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main) Dec 2 04:11:24 localhost podman[111594]: 2025-12-02 09:11:24.861155435 +0000 UTC m=+0.229722723 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7, 
com.redhat.component=rhceph-container, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=) Dec 2 04:11:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12365 DF PROTO=TCP SPT=42942 DPT=9101 SEQ=4269477184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47844CE40000000001030307) Dec 2 04:11:25 localhost systemd[1]: libpod-1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.scope: Deactivated successfully. Dec 2 04:11:25 localhost systemd[1]: libpod-1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.scope: Consumed 11.221s CPU time. 
Dec 2 04:11:25 localhost podman[111558]: 2025-12-02 09:11:25.569441366 +0000 UTC m=+1.128673614 container died 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:11:25 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.timer: Deactivated successfully. Dec 2 04:11:25 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85. Dec 2 04:11:25 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed to open /run/systemd/transient/1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: No such file or directory Dec 2 04:11:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85-userdata-shm.mount: Deactivated successfully. Dec 2 04:11:25 localhost systemd[1]: var-lib-containers-storage-overlay-3a1af3edb87ae84c24194878020e22370aba8355c75888d8a0972cd3b1ac86c8-merged.mount: Deactivated successfully. 
Dec 2 04:11:25 localhost podman[111558]: 2025-12-02 09:11:25.638432179 +0000 UTC m=+1.197664367 container cleanup 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_metadata_agent, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:11:25 localhost podman[111558]: ovn_metadata_agent Dec 2 04:11:25 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.timer: Failed to open /run/systemd/transient/1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.timer: No such file or directory Dec 2 04:11:25 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed to open /run/systemd/transient/1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: No such file or directory Dec 2 04:11:25 localhost podman[111706]: 2025-12-02 09:11:25.677587964 +0000 UTC m=+0.094454326 container cleanup 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent) Dec 2 04:11:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12366 DF PROTO=TCP SPT=42942 DPT=9101 SEQ=4269477184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47845CA40000000001030307) Dec 2 04:11:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60259 DF PROTO=TCP SPT=58926 DPT=9102 SEQ=2733500089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47846EAD0000000001030307) Dec 2 04:11:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53741 DF PROTO=TCP SPT=41902 DPT=9105 SEQ=3207970519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47846F2E0000000001030307) Dec 2 04:11:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60261 DF PROTO=TCP SPT=58926 DPT=9102 SEQ=2733500089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A47847AA50000000001030307) Dec 2 04:11:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46428 DF PROTO=TCP SPT=39210 DPT=9100 SEQ=2271897119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478486A40000000001030307) Dec 2 04:11:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19993 DF PROTO=TCP SPT=36268 DPT=9100 SEQ=3508387011 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478491E40000000001030307) Dec 2 04:11:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46430 DF PROTO=TCP SPT=39210 DPT=9100 SEQ=2271897119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47849E650000000001030307) Dec 2 04:11:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60263 DF PROTO=TCP SPT=58926 DPT=9102 SEQ=2733500089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4784A9E40000000001030307) Dec 2 04:11:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12176 DF PROTO=TCP SPT=49436 DPT=9101 SEQ=1109354132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4784B6200000000001030307) Dec 2 04:11:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12178 DF PROTO=TCP SPT=49436 DPT=9101 SEQ=1109354132 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080A4784C2240000000001030307) Dec 2 04:11:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12179 DF PROTO=TCP SPT=49436 DPT=9101 SEQ=1109354132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4784D1E40000000001030307) Dec 2 04:12:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58636 DF PROTO=TCP SPT=44682 DPT=9102 SEQ=1860097015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4784E3DD0000000001030307) Dec 2 04:12:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34983 DF PROTO=TCP SPT=34808 DPT=9105 SEQ=465326508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4784E45F0000000001030307) Dec 2 04:12:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58638 DF PROTO=TCP SPT=44682 DPT=9102 SEQ=1860097015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4784EFE40000000001030307) Dec 2 04:12:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2015 DF PROTO=TCP SPT=35046 DPT=9882 SEQ=851284200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4784FBE40000000001030307) Dec 2 04:12:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64822 DF PROTO=TCP SPT=57784 DPT=9100 SEQ=2509961656 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A478507E50000000001030307) Dec 2 04:12:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5080 DF PROTO=TCP SPT=49934 DPT=9100 SEQ=3740161054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478513A50000000001030307) Dec 2 04:12:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34987 DF PROTO=TCP SPT=34808 DPT=9105 SEQ=465326508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47851FE40000000001030307) Dec 2 04:12:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38447 DF PROTO=TCP SPT=34622 DPT=9101 SEQ=460115161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47852B500000000001030307) Dec 2 04:12:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38449 DF PROTO=TCP SPT=34622 DPT=9101 SEQ=460115161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478537640000000001030307) Dec 2 04:12:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38450 DF PROTO=TCP SPT=34622 DPT=9101 SEQ=460115161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478547240000000001030307) Dec 2 04:12:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62581 DF PROTO=TCP SPT=45506 DPT=9102 SEQ=3218035444 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785590D0000000001030307) Dec 2 04:12:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44227 DF PROTO=TCP SPT=50634 DPT=9105 SEQ=3074358929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785598E0000000001030307) Dec 2 04:12:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62583 DF PROTO=TCP SPT=45506 DPT=9102 SEQ=3218035444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478565250000000001030307) Dec 2 04:12:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18869 DF PROTO=TCP SPT=58378 DPT=9100 SEQ=4264979749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478571240000000001030307) Dec 2 04:12:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22643 DF PROTO=TCP SPT=33716 DPT=9882 SEQ=1764166890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47857D250000000001030307) Dec 2 04:12:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18871 DF PROTO=TCP SPT=58378 DPT=9100 SEQ=4264979749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478588E40000000001030307) Dec 2 04:12:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62585 DF PROTO=TCP SPT=45506 DPT=9102 
SEQ=3218035444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478595E40000000001030307)
Dec 2 04:12:49 localhost systemd[1]: tripleo_ovn_metadata_agent.service: State 'stop-sigterm' timed out. Killing.
Dec 2 04:12:49 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Killing process 71612 (conmon) with signal SIGKILL.
Dec 2 04:12:49 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Main process exited, code=killed, status=9/KILL
Dec 2 04:12:49 localhost systemd[1]: libpod-conmon-1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.scope: Deactivated successfully.
Dec 2 04:12:49 localhost systemd[1]: tmp-crun.rqpHrh.mount: Deactivated successfully.
Dec 2 04:12:49 localhost podman[111845]: error opening file `/run/crun/1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85/status`: No such file or directory
Dec 2 04:12:49 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.timer: Failed to open /run/systemd/transient/1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.timer: No such file or directory
Dec 2 04:12:49 localhost systemd[1]: 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: Failed to open /run/systemd/transient/1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85.service: No such file or directory
Dec 2 04:12:49 localhost podman[111832]: 2025-12-02 09:12:49.95265872 +0000 UTC m=+0.093987586 container cleanup 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 2 04:12:49 localhost podman[111832]: ovn_metadata_agent
Dec 2 04:12:49 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Failed with result 'timeout'.
Dec 2 04:12:49 localhost systemd[1]: Stopped ovn_metadata_agent container.
Dec 2 04:12:50 localhost python3.9[111939]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:12:50 localhost systemd[1]: Reloading.
Dec 2 04:12:50 localhost systemd-rc-local-generator[111964]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:12:50 localhost systemd-sysv-generator[111969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:12:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:12:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30901 DF PROTO=TCP SPT=55216 DPT=9101 SEQ=2707684008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785A0800000000001030307) Dec 2 04:12:52 localhost python3.9[112069]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:53 localhost python3.9[112161]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:53 localhost python3.9[112253]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:54 localhost python3.9[112345]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:54 localhost python3.9[112437]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30903 DF PROTO=TCP SPT=55216 DPT=9101 SEQ=2707684008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785ACA40000000001030307) Dec 2 04:12:55 localhost python3.9[112529]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:55 localhost python3.9[112621]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 
2 04:12:56 localhost python3.9[112713]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:57 localhost python3.9[112805]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:57 localhost python3.9[112897]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:58 localhost python3.9[112989]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:58 localhost python3.9[113081]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:59 localhost python3.9[113173]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:12:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30904 DF PROTO=TCP SPT=55216 DPT=9101 SEQ=2707684008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785BC640000000001030307) Dec 2 04:12:59 localhost python3.9[113265]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:00 localhost python3.9[113357]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:01 localhost python3.9[113449]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:01 localhost python3.9[113541]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:02 localhost python3.9[113633]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:02 localhost python3.9[113725]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:03 localhost python3.9[113817]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:03 localhost python3.9[113909]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35006 DF PROTO=TCP SPT=56412 DPT=9102 SEQ=3164046067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785CE3D0000000001030307) Dec 2 04:13:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18193 DF PROTO=TCP SPT=55084 DPT=9105 SEQ=3840888555 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785CEBE0000000001030307) Dec 2 04:13:05 localhost python3.9[114001]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:05 localhost python3.9[114093]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:06 localhost python3.9[114185]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:06 localhost python3.9[114277]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35008 DF PROTO=TCP SPT=56412 DPT=9102 SEQ=3164046067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785DA640000000001030307) Dec 2 04:13:07 localhost python3.9[114369]: ansible-ansible.builtin.file 
Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:07 localhost python3.9[114461]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:08 localhost python3.9[114553]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:08 localhost python3.9[114645]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:09 localhost python3.9[114737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:09 localhost python3.9[114829]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42760 DF PROTO=TCP SPT=40518 DPT=9882 SEQ=3640480416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785E5E40000000001030307) Dec 2 04:13:10 localhost python3.9[114921]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:13:11 localhost python3.9[115013]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None 
serole=None selevel=None setype=None attributes=None
Dec 2 04:13:11 localhost python3.9[115105]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:12 localhost python3.9[115197]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:12 localhost python3.9[115289]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5083 DF PROTO=TCP SPT=49934 DPT=9100 SEQ=3740161054 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785F1E40000000001030307)
Dec 2 04:13:13 localhost python3.9[115381]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:13 localhost python3.9[115473]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:14 localhost python3.9[115565]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:14 localhost python3.9[115657]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:15 localhost python3.9[115749]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:15 localhost python3.9[115841]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61893 DF PROTO=TCP SPT=34570 DPT=9100 SEQ=3363060885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4785FDE40000000001030307)
Dec 2 04:13:17 localhost python3.9[115933]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:17 localhost python3.9[116025]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 2 04:13:18 localhost python3.9[116117]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:13:18 localhost systemd[1]: Reloading.
Dec 2 04:13:18 localhost systemd-rc-local-generator[116142]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:13:18 localhost systemd-sysv-generator[116145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:13:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:13:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35010 DF PROTO=TCP SPT=56412 DPT=9102 SEQ=3164046067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478609E50000000001030307)
Dec 2 04:13:19 localhost python3.9[116246]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:20 localhost python3.9[116339]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:20 localhost python3.9[116432]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:21 localhost python3.9[116525]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:21 localhost python3.9[116618]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4366 DF PROTO=TCP SPT=46736 DPT=9101 SEQ=3369090741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478615B00000000001030307)
Dec 2 04:13:22 localhost python3.9[116711]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:23 localhost python3.9[116804]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:23 localhost python3.9[116897]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:24 localhost python3.9[116990]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:24 localhost python3.9[117083]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:25 localhost python3.9[117176]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4368 DF PROTO=TCP SPT=46736 DPT=9101 SEQ=3369090741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478621A40000000001030307)
Dec 2 04:13:25 localhost python3.9[117269]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:26 localhost python3.9[117362]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:27 localhost python3.9[117455]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:27 localhost python3.9[117548]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:28 localhost python3.9[117641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:28 localhost python3.9[117734]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:29 localhost python3.9[117873]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4369 DF PROTO=TCP SPT=46736 DPT=9101 SEQ=3369090741 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478631650000000001030307)
Dec 2 04:13:29 localhost python3.9[117983]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:30 localhost python3.9[118091]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:30 localhost python3.9[118184]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:32 localhost systemd[1]: session-38.scope: Deactivated successfully.
Dec 2 04:13:32 localhost systemd[1]: session-38.scope: Consumed 48.358s CPU time.
Dec 2 04:13:32 localhost systemd-logind[757]: Session 38 logged out. Waiting for processes to exit.
Dec 2 04:13:32 localhost systemd-logind[757]: Removed session 38.
Dec 2 04:13:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6338 DF PROTO=TCP SPT=50254 DPT=9102 SEQ=1784828263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786436D0000000001030307)
Dec 2 04:13:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28424 DF PROTO=TCP SPT=40026 DPT=9105 SEQ=1371069876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478643EE0000000001030307)
Dec 2 04:13:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6340 DF PROTO=TCP SPT=50254 DPT=9102 SEQ=1784828263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47864F640000000001030307)
Dec 2 04:13:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20023 DF PROTO=TCP SPT=43446 DPT=9100 SEQ=1727308412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47865B640000000001030307)
Dec 2 04:13:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45252 DF PROTO=TCP SPT=42452 DPT=9882 SEQ=808888576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478667650000000001030307)
Dec 2 04:13:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20025 DF PROTO=TCP SPT=43446 DPT=9100 SEQ=1727308412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478673250000000001030307)
Dec 2 04:13:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6342 DF PROTO=TCP SPT=50254 DPT=9102 SEQ=1784828263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47867FE40000000001030307)
Dec 2 04:13:52 localhost sshd[118201]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:13:52 localhost systemd-logind[757]: New session 39 of user zuul.
Dec 2 04:13:52 localhost systemd[1]: Started Session 39 of User zuul.
Dec 2 04:13:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3088 DF PROTO=TCP SPT=47780 DPT=9101 SEQ=3477556958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47868AE00000000001030307)
Dec 2 04:13:52 localhost python3.9[118294]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 2 04:13:54 localhost python3.9[118398]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:13:54 localhost python3.9[118490]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:13:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3090 DF PROTO=TCP SPT=47780 DPT=9101 SEQ=3477556958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478696E40000000001030307)
Dec 2 04:13:55 localhost python3.9[118583]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:13:56 localhost python3.9[118675]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:57 localhost python3.9[118767]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:13:57 localhost python3.9[118840]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764666836.8977585-178-220646490490930/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:13:58 localhost python3.9[118932]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:13:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3091 DF PROTO=TCP SPT=47780 DPT=9101 SEQ=3477556958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786A6A40000000001030307)
Dec 2 04:13:59 localhost python3.9[119028]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:14:00 localhost python3.9[119120]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:14:01 localhost python3.9[119210]: ansible-ansible.builtin.service_facts Invoked
Dec 2 04:14:01 localhost network[119227]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 04:14:01 localhost network[119228]: 'network-scripts' will be removed from distribution in near future.
Dec 2 04:14:01 localhost network[119229]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 04:14:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:14:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37698 DF PROTO=TCP SPT=52042 DPT=9102 SEQ=1732293158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786B89E0000000001030307)
Dec 2 04:14:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46409 DF PROTO=TCP SPT=52604 DPT=9105 SEQ=3146473158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786B91E0000000001030307)
Dec 2 04:14:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37700 DF PROTO=TCP SPT=52042 DPT=9102 SEQ=1732293158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786C4A50000000001030307)
Dec 2 04:14:08 localhost python3.9[119426]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:14:08 localhost python3.9[119516]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:14:09 localhost python3.9[119612]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:14:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4276 DF PROTO=TCP SPT=33714 DPT=9100 SEQ=2178519232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786D0A40000000001030307)
Dec 2 04:14:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61896 DF PROTO=TCP SPT=34570 DPT=9100 SEQ=3363060885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786DBE40000000001030307)
Dec 2 04:14:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4278 DF PROTO=TCP SPT=33714 DPT=9100 SEQ=2178519232 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786E8860000000001030307)
Dec 2 04:14:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37702 DF PROTO=TCP SPT=52042 DPT=9102 SEQ=1732293158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4786F3E40000000001030307)
Dec 2 04:14:19 localhost systemd[1]: Stopping OpenSSH server daemon...
Dec 2 04:14:19 localhost systemd[1]: sshd.service: Deactivated successfully.
Dec 2 04:14:19 localhost systemd[1]: Stopped OpenSSH server daemon.
Dec 2 04:14:19 localhost systemd[1]: Stopped target sshd-keygen.target.
Dec 2 04:14:19 localhost systemd[1]: Stopping sshd-keygen.target...
Dec 2 04:14:19 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:19 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:19 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:19 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 2 04:14:19 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 2 04:14:19 localhost sshd[119655]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:14:19 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 04:14:19 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:14:19 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 04:14:19 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:14:19 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 04:14:19 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 04:14:19 localhost systemd[1]: run-re073af74d5844759971be19ad57f6fae.service: Deactivated successfully.
Dec 2 04:14:19 localhost systemd[1]: run-r5823f4815365465baed69fb2cb536815.service: Deactivated successfully.
Dec 2 04:14:20 localhost systemd[1]: Stopping OpenSSH server daemon...
Dec 2 04:14:20 localhost systemd[1]: sshd.service: Deactivated successfully.
Dec 2 04:14:20 localhost systemd[1]: Stopped OpenSSH server daemon.
Dec 2 04:14:20 localhost systemd[1]: Stopped target sshd-keygen.target.
Dec 2 04:14:20 localhost systemd[1]: Stopping sshd-keygen.target...
Dec 2 04:14:20 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:20 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:20 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 2 04:14:20 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 2 04:14:20 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 2 04:14:20 localhost sshd[119826]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:14:20 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 04:14:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55368 DF PROTO=TCP SPT=38298 DPT=9101 SEQ=4057788527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478700100000000001030307)
Dec 2 04:14:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55370 DF PROTO=TCP SPT=38298 DPT=9101 SEQ=4057788527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47870C240000000001030307)
Dec 2 04:14:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55371 DF PROTO=TCP SPT=38298 DPT=9101 SEQ=4057788527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47871BE40000000001030307)
Dec 2 04:14:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49019 DF PROTO=TCP SPT=35002 DPT=9102 SEQ=1644456260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47872DCE0000000001030307)
Dec 2 04:14:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35292 DF PROTO=TCP SPT=41226 DPT=9105 SEQ=3782781303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47872E4E0000000001030307)
Dec 2 04:14:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49021 DF PROTO=TCP SPT=35002 DPT=9102 SEQ=1644456260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478739E50000000001030307)
Dec 2 04:14:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6050 DF PROTO=TCP SPT=45524 DPT=9100 SEQ=2387660439 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478745A40000000001030307)
Dec 2 04:14:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25741 DF PROTO=TCP SPT=48856 DPT=9882 SEQ=299773429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478751E40000000001030307)
Dec 2 04:14:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6052 DF PROTO=TCP SPT=45524 DPT=9100 SEQ=2387660439 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47875D640000000001030307)
Dec 2 04:14:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35296 DF PROTO=TCP SPT=41226 DPT=9105 SEQ=3782781303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478769E40000000001030307)
Dec 2 04:14:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27554 DF PROTO=TCP SPT=53696 DPT=9101 SEQ=2980122975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787753F0000000001030307)
Dec 2 04:14:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27556 DF PROTO=TCP SPT=53696 DPT=9101 SEQ=2980122975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478781660000000001030307)
Dec 2 04:14:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27557 DF PROTO=TCP SPT=53696 DPT=9101 SEQ=2980122975 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478791240000000001030307)
Dec 2 04:15:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5163 DF PROTO=TCP SPT=47828 DPT=9102 SEQ=2776328889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787A2FD0000000001030307)
Dec 2 04:15:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49720 DF PROTO=TCP SPT=40520 DPT=9105 SEQ=4015283433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787A37F0000000001030307)
Dec 2 04:15:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5165 DF PROTO=TCP SPT=47828 DPT=9102 SEQ=2776328889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787AF240000000001030307)
Dec 2 04:15:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57468 DF PROTO=TCP SPT=36354 DPT=9100 SEQ=3821444339 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787BAE50000000001030307)
Dec 2 04:15:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38737 DF PROTO=TCP SPT=34998 DPT=9882 SEQ=296824685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787C7240000000001030307)
Dec 2 04:15:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57470 DF PROTO=TCP SPT=36354 DPT=9100 SEQ=3821444339 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787D2A50000000001030307)
Dec 2 04:15:17 localhost sshd[120299]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:15:17 localhost sshd[120300]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:15:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49724 DF PROTO=TCP SPT=40520 DPT=9105 SEQ=4015283433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787DFE40000000001030307)
Dec 2 04:15:22 localhost sshd[120305]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:15:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42766 DF PROTO=TCP SPT=49826 DPT=9101 SEQ=3659717258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787EA700000000001030307)
Dec 2 04:15:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42768 DF PROTO=TCP SPT=49826 DPT=9101 SEQ=3659717258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4787F6640000000001030307)
Dec 2 04:15:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42769 DF PROTO=TCP SPT=49826 DPT=9101 SEQ=3659717258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478806250000000001030307)
Dec 2 04:15:31
localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=16 res=1 Dec 2 04:15:32 localhost kernel: SELinux: Converting 2754 SID table entries... Dec 2 04:15:32 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:15:32 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:15:32 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:15:32 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:15:32 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:15:32 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:15:32 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:15:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48811 DF PROTO=TCP SPT=46548 DPT=9102 SEQ=2783521122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4788182D0000000001030307) Dec 2 04:15:34 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=17 res=1 Dec 2 04:15:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37572 DF PROTO=TCP SPT=50216 DPT=9105 SEQ=1091935778 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478818AE0000000001030307) Dec 2 04:15:34 localhost python3.9[120594]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:15:34 localhost python3.9[120686]: 
ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:15:35 localhost python3.9[120759]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764666934.3913689-427-85164160909968/.source.fact _original_basename=.oir8p_a3 follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:15:36 localhost python3.9[120849]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:15:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48813 DF PROTO=TCP SPT=46548 DPT=9102 SEQ=2783521122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478824240000000001030307) Dec 2 04:15:37 localhost python3.9[120962]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:15:38 localhost python3.9[121016]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False 
enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:15:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25744 DF PROTO=TCP SPT=48856 DPT=9882 SEQ=299773429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47882FE50000000001030307) Dec 2 04:15:41 localhost systemd[1]: Reloading. Dec 2 04:15:41 localhost systemd-rc-local-generator[121044]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:15:41 localhost systemd-sysv-generator[121051]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:15:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:15:41 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 04:15:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6055 DF PROTO=TCP SPT=45524 DPT=9100 SEQ=2387660439 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47883BE40000000001030307)
Dec 2 04:15:44 localhost python3.9[121156]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:15:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10755 DF PROTO=TCP SPT=51852 DPT=9100 SEQ=659875632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478847E40000000001030307)
Dec 2 04:15:46 localhost python3.9[121395]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 2 04:15:47 localhost python3.9[121487]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 2 04:15:48 localhost python3.9[121580]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:15:49 localhost python3.9[121672]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 2 04:15:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48815 DF PROTO=TCP SPT=46548 DPT=9102 SEQ=2783521122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478853E50000000001030307)
Dec 2 04:15:51 localhost python3.9[121764]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:15:51 localhost python3.9[121856]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:15:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49472 DF PROTO=TCP SPT=46846 DPT=9101 SEQ=780407079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47885FA00000000001030307)
Dec 2 04:15:52 localhost python3.9[121929]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764666951.1942809-751-252761449007449/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:15:53 localhost python3.9[122021]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:15:54 localhost python3.9[122115]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 2 04:15:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49474 DF PROTO=TCP SPT=46846 DPT=9101 SEQ=780407079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47886BA40000000001030307)
Dec 2 04:15:55 localhost python3.9[122208]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 2 04:15:56 localhost python3.9[122301]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 2 04:15:57 localhost python3.9[122399]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 2 04:15:58 localhost python3.9[122491]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:15:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49475 DF PROTO=TCP SPT=46846 DPT=9101 SEQ=780407079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47887B640000000001030307)
Dec 2 04:16:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36594 DF PROTO=TCP SPT=53536 DPT=9102 SEQ=2979098041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47888D5D0000000001030307)
Dec 2 04:16:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16070 DF PROTO=TCP SPT=36918 DPT=9105 SEQ=1378865609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47888DE00000000001030307)
Dec 2 04:16:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36596 DF PROTO=TCP SPT=53536 DPT=9102 SEQ=2979098041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478899640000000001030307)
Dec 2 04:16:08 localhost python3.9[122585]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:16:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3585 DF PROTO=TCP SPT=40812 DPT=9100 SEQ=1480713733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4788A5640000000001030307)
Dec 2 04:16:12 localhost python3.9[122677]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:16:13 localhost python3.9[122750]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764666972.1213596-1024-25856825055567/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:16:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41543 DF PROTO=TCP SPT=47632 DPT=9882 SEQ=4209715839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4788B1640000000001030307)
Dec 2 04:16:15 localhost python3.9[122842]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:16:15 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 2 04:16:15 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 2 04:16:15 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 2 04:16:15 localhost systemd[1]: Starting Load Kernel Modules...
Dec 2 04:16:15 localhost systemd-modules-load[122846]: Module 'msr' is built in
Dec 2 04:16:15 localhost systemd[1]: Finished Load Kernel Modules.
Dec 2 04:16:16 localhost python3.9[122939]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:16:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3587 DF PROTO=TCP SPT=40812 DPT=9100 SEQ=1480713733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4788BD250000000001030307)
Dec 2 04:16:16 localhost python3.9[123012]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764666975.6873782-1094-212866617272328/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:16:17 localhost python3.9[123104]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:16:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36598 DF PROTO=TCP SPT=53536 DPT=9102 SEQ=2979098041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4788C9E40000000001030307)
Dec 2 04:16:21 localhost python3.9[123196]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:16:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12345 DF PROTO=TCP SPT=45770 DPT=9101 SEQ=498264673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4788D4D00000000001030307)
Dec 2 04:16:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12347 DF PROTO=TCP SPT=45770 DPT=9101 SEQ=498264673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4788E0E40000000001030307)
Dec 2 04:16:26 localhost python3.9[123288]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 2 04:16:27 localhost python3.9[123378]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:16:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12348 DF PROTO=TCP SPT=45770 DPT=9101 SEQ=498264673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4788F0A40000000001030307)
Dec 2 04:16:29 localhost python3.9[123470]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:16:29 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 2 04:16:29 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 2 04:16:29 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 2 04:16:29 localhost systemd[1]: tuned.service: Consumed 1.834s CPU time, no IO.
Dec 2 04:16:29 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 2 04:16:30 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 2 04:16:32 localhost python3.9[123572]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 2 04:16:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34440 DF PROTO=TCP SPT=35194 DPT=9102 SEQ=3571250042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789028D0000000001030307)
Dec 2 04:16:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51580 DF PROTO=TCP SPT=36988 DPT=9105 SEQ=2487867486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789030F0000000001030307)
Dec 2 04:16:35 localhost python3.9[123664]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:16:35 localhost systemd[1]: Reloading.
Dec 2 04:16:35 localhost systemd-rc-local-generator[123693]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:16:35 localhost systemd-sysv-generator[123696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:16:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:16:36 localhost python3.9[123823]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:16:36 localhost systemd[1]: Reloading.
Dec 2 04:16:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34442 DF PROTO=TCP SPT=35194 DPT=9102 SEQ=3571250042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47890EA40000000001030307)
Dec 2 04:16:37 localhost systemd-rc-local-generator[123876]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:16:37 localhost systemd-sysv-generator[123880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:16:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:16:38 localhost python3.9[124032]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:16:39 localhost python3.9[124125]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:16:39 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS
Dec 2 04:16:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62114 DF PROTO=TCP SPT=55758 DPT=9100 SEQ=4126746310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47891AA40000000001030307)
Dec 2 04:16:40 localhost python3.9[124218]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:16:42 localhost python3.9[124332]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:16:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10758 DF PROTO=TCP SPT=51852 DPT=9100 SEQ=659875632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478925E40000000001030307)
Dec 2 04:16:44 localhost python3.9[124425]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:16:45 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 2 04:16:45 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 2 04:16:45 localhost systemd[1]: Stopping Apply Kernel Variables...
Dec 2 04:16:45 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 2 04:16:45 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 2 04:16:45 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 2 04:16:45 localhost systemd[1]: session-39.scope: Deactivated successfully.
Dec 2 04:16:45 localhost systemd[1]: session-39.scope: Consumed 1min 57.123s CPU time.
Dec 2 04:16:45 localhost systemd-logind[757]: Session 39 logged out. Waiting for processes to exit.
Dec 2 04:16:45 localhost systemd-logind[757]: Removed session 39.
Dec 2 04:16:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62116 DF PROTO=TCP SPT=55758 DPT=9100 SEQ=4126746310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478932640000000001030307)
Dec 2 04:16:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34444 DF PROTO=TCP SPT=35194 DPT=9102 SEQ=3571250042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47893DE40000000001030307)
Dec 2 04:16:52 localhost sshd[124446]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:16:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22365 DF PROTO=TCP SPT=33142 DPT=9101 SEQ=371735630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47894A010000000001030307)
Dec 2 04:16:52 localhost systemd-logind[757]: New session 40 of user zuul.
Dec 2 04:16:52 localhost systemd[1]: Started Session 40 of User zuul.
Dec 2 04:16:53 localhost python3.9[124539]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:16:55 localhost python3.9[124633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:16:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22367 DF PROTO=TCP SPT=33142 DPT=9101 SEQ=371735630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478956250000000001030307)
Dec 2 04:16:56 localhost python3.9[124729]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:16:57 localhost python3.9[124820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:16:58 localhost python3.9[124916]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:16:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22368 DF PROTO=TCP SPT=33142 DPT=9101 SEQ=371735630 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478965E40000000001030307)
Dec 2 04:16:59 localhost python3.9[124970]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:17:03 localhost python3.9[125064]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:17:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57118 DF PROTO=TCP SPT=59636 DPT=9102 SEQ=2262952419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478977BE0000000001030307)
Dec 2 04:17:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26803 DF PROTO=TCP SPT=38970 DPT=9105 SEQ=3931880294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789783F0000000001030307)
Dec 2 04:17:05 localhost python3.9[125219]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:17:06 localhost python3.9[125311]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:17:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57120 DF PROTO=TCP SPT=59636 DPT=9102 SEQ=2262952419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478983E40000000001030307)
Dec 2 04:17:07 localhost python3.9[125416]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:17:07 localhost python3.9[125464]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:17:08 localhost python3.9[125557]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:17:09 localhost python3.9[125630]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667028.1348813-324-192139157947119/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:17:10 localhost kernel:
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33945 DF PROTO=TCP SPT=41010 DPT=9100 SEQ=2615179156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47898FA40000000001030307) Dec 2 04:17:10 localhost python3.9[125722]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:10 localhost systemd-journald[47611]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Dec 2 04:17:10 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:17:10 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:17:10 localhost rsyslogd[754]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:17:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:17:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 4776 writes, 21K keys, 4776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4776 writes, 569 syncs, 8.39 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:17:10 localhost python3.9[125815]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:11 localhost python3.9[125907]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:12 localhost python3.9[125999]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf 
section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 2 04:17:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27647 DF PROTO=TCP SPT=46044 DPT=9882 SEQ=2056543486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47899BE40000000001030307) Dec 2 04:17:13 localhost python3.9[126089]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:17:13 localhost python3.9[126183]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:17:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.2 total, 600.0 interval#012Cumulative writes: 5722 writes, 25K keys, 5722 commit groups, 1.0 writes per 
commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5722 writes, 780 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 04:17:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33947 DF PROTO=TCP SPT=41010 DPT=9100 SEQ=2615179156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789A7640000000001030307) Dec 2 04:17:17 localhost python3.9[126277]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57122 DF PROTO=TCP SPT=59636 DPT=9102 SEQ=2262952419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789B3E40000000001030307) Dec 2 04:17:22 localhost python3.9[126371]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] 
enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61837 DF PROTO=TCP SPT=41094 DPT=9101 SEQ=2326005047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789BF2F0000000001030307) Dec 2 04:17:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61839 DF PROTO=TCP SPT=41094 DPT=9101 SEQ=2326005047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789CB240000000001030307) Dec 2 04:17:26 localhost python3.9[126471]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61840 DF PROTO=TCP SPT=41094 DPT=9101 SEQ=2326005047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789DAE40000000001030307) Dec 2 04:17:30 localhost 
python3.9[126565]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35409 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1314632820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789ECEE0000000001030307) Dec 2 04:17:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60828 DF PROTO=TCP SPT=32992 DPT=9105 SEQ=474927710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789ED6E0000000001030307) Dec 2 04:17:34 localhost python3.9[126659]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35411 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1314632820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4789F8E50000000001030307) Dec 2 04:17:39 localhost python3.9[126753]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:17:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57878 DF PROTO=TCP SPT=36960 DPT=9100 SEQ=3294421962 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A04E50000000001030307) Dec 2 04:17:42 localhost podman[126889]: Dec 2 04:17:42 localhost podman[126889]: 2025-12-02 09:17:42.56454445 +0000 UTC m=+0.082199293 container create 0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_thompson, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:17:42 localhost systemd[1]: Started libpod-conmon-0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2.scope. Dec 2 04:17:42 localhost podman[126889]: 2025-12-02 09:17:42.529741281 +0000 UTC m=+0.047396214 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:17:42 localhost systemd[1]: Started libcrun container. 
Dec 2 04:17:42 localhost podman[126889]: 2025-12-02 09:17:42.652119366 +0000 UTC m=+0.169774219 container init 0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_thompson, release=1763362218, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Dec 2 04:17:42 localhost systemd[1]: tmp-crun.TpO52g.mount: Deactivated successfully. 
Dec 2 04:17:42 localhost podman[126889]: 2025-12-02 09:17:42.665301263 +0000 UTC m=+0.182956106 container start 0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_thompson, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7) Dec 2 04:17:42 localhost podman[126889]: 2025-12-02 09:17:42.665790316 +0000 UTC m=+0.183445169 container attach 0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_thompson, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z) Dec 2 04:17:42 localhost interesting_thompson[126909]: 167 167 Dec 2 04:17:42 localhost systemd[1]: libpod-0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2.scope: Deactivated successfully. Dec 2 04:17:42 localhost podman[126889]: 2025-12-02 09:17:42.66970222 +0000 UTC m=+0.187357103 container died 0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_thompson, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhceph, distribution-scope=public, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, 
com.redhat.component=rhceph-container, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux , release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:17:42 localhost podman[126914]: 2025-12-02 09:17:42.776119863 +0000 UTC m=+0.092858605 container remove 0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_thompson, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_CLEAN=True, RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux ) Dec 2 04:17:42 localhost systemd[1]: libpod-conmon-0a7d769e135bdc49e57aeba28a6b7db501fa30678c8512d63b3db6043332eeb2.scope: Deactivated successfully. 
Dec 2 04:17:42 localhost podman[126937]: Dec 2 04:17:42 localhost podman[126937]: 2025-12-02 09:17:42.991935887 +0000 UTC m=+0.062825892 container create 584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_blackwell, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux ) Dec 2 04:17:43 localhost systemd[1]: Started libpod-conmon-584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83.scope. Dec 2 04:17:43 localhost systemd[1]: Started libcrun container. 
Dec 2 04:17:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09380d250f89949106767e72a6b6f29b2ae1bf10c044a8bbf99723f4c9324e41/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 04:17:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09380d250f89949106767e72a6b6f29b2ae1bf10c044a8bbf99723f4c9324e41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:17:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09380d250f89949106767e72a6b6f29b2ae1bf10c044a8bbf99723f4c9324e41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:17:43 localhost podman[126937]: 2025-12-02 09:17:42.961666517 +0000 UTC m=+0.032556562 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:17:43 localhost podman[126937]: 2025-12-02 09:17:43.064379382 +0000 UTC m=+0.135269397 container init 584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_blackwell, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-type=git, name=rhceph, release=1763362218, 
GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public) Dec 2 04:17:43 localhost podman[126937]: 2025-12-02 09:17:43.074593472 +0000 UTC m=+0.145483477 container start 584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_blackwell, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64) Dec 2 04:17:43 localhost podman[126937]: 2025-12-02 09:17:43.074873809 +0000 UTC m=+0.145763844 container attach 584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_blackwell, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:17:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33396 DF PROTO=TCP SPT=41688 DPT=9882 SEQ=2959870546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A10E50000000001030307) Dec 2 04:17:43 localhost systemd[1]: var-lib-containers-storage-overlay-a223cc482315e6f5ca6ed7f89a417a449a6873319cee807130f2dbd9f2404549-merged.mount: Deactivated successfully. 
Dec 2 04:17:43 localhost beautiful_blackwell[126952]: [ Dec 2 04:17:43 localhost beautiful_blackwell[126952]: { Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "available": false, Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "ceph_device": false, Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "lsm_data": {}, Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "lvs": [], Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "path": "/dev/sr0", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "rejected_reasons": [ Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "Insufficient space (<5GB)", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "Has a FileSystem" Dec 2 04:17:43 localhost beautiful_blackwell[126952]: ], Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "sys_api": { Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "actuators": null, Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "device_nodes": "sr0", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "human_readable_size": "482.00 KB", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "id_bus": "ata", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "model": "QEMU DVD-ROM", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "nr_requests": "2", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "partitions": {}, Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "path": "/dev/sr0", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "removable": "1", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "rev": "2.5+", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "ro": "0", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "rotational": "1", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "sas_address": "", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "sas_device_handle": "", Dec 2 04:17:43 localhost 
beautiful_blackwell[126952]: "scheduler_mode": "mq-deadline", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "sectors": 0, Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "sectorsize": "2048", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "size": 493568.0, Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "support_discard": "0", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "type": "disk", Dec 2 04:17:43 localhost beautiful_blackwell[126952]: "vendor": "QEMU" Dec 2 04:17:43 localhost beautiful_blackwell[126952]: } Dec 2 04:17:43 localhost beautiful_blackwell[126952]: } Dec 2 04:17:43 localhost beautiful_blackwell[126952]: ] Dec 2 04:17:43 localhost systemd[1]: libpod-584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83.scope: Deactivated successfully. Dec 2 04:17:43 localhost podman[126937]: 2025-12-02 09:17:43.959741648 +0000 UTC m=+1.030631703 container died 584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_blackwell, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_CLEAN=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., 
url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, ceph=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container) Dec 2 04:17:44 localhost systemd[1]: var-lib-containers-storage-overlay-09380d250f89949106767e72a6b6f29b2ae1bf10c044a8bbf99723f4c9324e41-merged.mount: Deactivated successfully. Dec 2 04:17:44 localhost podman[128377]: 2025-12-02 09:17:44.042649369 +0000 UTC m=+0.071703806 container remove 584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_blackwell, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, maintainer=Guillaume Abrioux , release=1763362218, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat Ceph Storage 7) Dec 2 04:17:44 localhost systemd[1]: libpod-conmon-584cf8401d3577a809601fc10ecf5badb1bca48e7e859e3b601e9f6e8b6cfd83.scope: Deactivated successfully. 
Dec 2 04:17:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57880 DF PROTO=TCP SPT=36960 DPT=9100 SEQ=3294421962 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A1CA40000000001030307) Dec 2 04:17:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35413 DF PROTO=TCP SPT=54814 DPT=9102 SEQ=1314632820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A29E40000000001030307) Dec 2 04:17:50 localhost python3.9[128565]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:17:51 localhost python3.9[128670]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:17:52 localhost python3.9[128743]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764667070.8009062-723-36403732161604/.source.json _original_basename=.q_s3xwg4 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:17:52 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58831 DF PROTO=TCP SPT=47950 DPT=9101 SEQ=281279711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A34600000000001030307) Dec 2 04:17:53 localhost python3.9[128835]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 2 04:17:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58833 DF PROTO=TCP SPT=47950 DPT=9101 SEQ=281279711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A40640000000001030307) Dec 2 04:17:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58834 DF PROTO=TCP SPT=47950 DPT=9101 SEQ=281279711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A50240000000001030307) Dec 2 04:18:00 localhost podman[128847]: 2025-12-02 09:17:53.266354978 +0000 UTC m=+0.039647759 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 2 04:18:02 localhost 
python3.9[129049]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 2 04:18:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49408 DF PROTO=TCP SPT=48156 DPT=9102 SEQ=768687781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A621D0000000001030307) Dec 2 04:18:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20611 DF PROTO=TCP SPT=40694 DPT=9105 SEQ=2119834166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A629F0000000001030307) Dec 2 04:18:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49410 DF PROTO=TCP SPT=48156 DPT=9102 SEQ=768687781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A6E240000000001030307) Dec 2 04:18:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=27650 DF PROTO=TCP SPT=46044 DPT=9882 SEQ=2056543486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A79E50000000001030307) Dec 2 04:18:10 localhost systemd[1]: Starting dnf makecache... Dec 2 04:18:10 localhost dnf[129112]: Updating Subscription Management repositories. Dec 2 04:18:12 localhost podman[129061]: 2025-12-02 09:18:02.867597986 +0000 UTC m=+0.050744342 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 04:18:12 localhost dnf[129112]: Metadata cache refreshed recently. Dec 2 04:18:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33950 DF PROTO=TCP SPT=41010 DPT=9100 SEQ=2615179156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A85E50000000001030307) Dec 2 04:18:13 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Dec 2 04:18:13 localhost systemd[1]: Finished dnf makecache. Dec 2 04:18:13 localhost systemd[1]: dnf-makecache.service: Consumed 2.546s CPU time. 
Dec 2 04:18:13 localhost python3.9[129263]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 2 04:18:14 localhost podman[129276]: 2025-12-02 09:18:13.401221417 +0000 UTC m=+0.029232963 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Dec 2 04:18:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37301 DF PROTO=TCP SPT=57024 DPT=9100 SEQ=2549518513 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A91E40000000001030307) Dec 2 04:18:16 localhost python3.9[129440]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': 
None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 2 04:18:17 localhost podman[129453]: 2025-12-02 09:18:16.449342086 +0000 UTC m=+0.027142699 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 04:18:19 localhost python3.9[129616]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 2 04:18:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20615 DF PROTO=TCP SPT=40694 DPT=9105 SEQ=2119834166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478A9DE40000000001030307) Dec 2 04:18:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=436 DF PROTO=TCP SPT=47336 DPT=9101 SEQ=103032161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478AA9900000000001030307) Dec 2 04:18:22 localhost podman[129630]: 2025-12-02 09:18:19.318044479 +0000 UTC 
m=+0.035934311 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Dec 2 04:18:23 localhost python3.9[129809]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 2 04:18:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=438 DF PROTO=TCP SPT=47336 DPT=9101 SEQ=103032161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478AB5A50000000001030307) Dec 2 04:18:25 localhost podman[129822]: 2025-12-02 09:18:23.656698488 +0000 UTC m=+0.044917658 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Dec 2 04:18:27 localhost systemd[1]: session-40.scope: Deactivated successfully. Dec 2 04:18:27 localhost systemd[1]: session-40.scope: Consumed 1min 34.098s CPU time. Dec 2 04:18:27 localhost systemd-logind[757]: Session 40 logged out. Waiting for processes to exit. Dec 2 04:18:27 localhost systemd-logind[757]: Removed session 40. 
Dec 2 04:18:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=439 DF PROTO=TCP SPT=47336 DPT=9101 SEQ=103032161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478AC5650000000001030307) Dec 2 04:18:33 localhost sshd[130353]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:18:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2138 DF PROTO=TCP SPT=36282 DPT=9102 SEQ=3021099205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478AD74D0000000001030307) Dec 2 04:18:33 localhost systemd-logind[757]: New session 41 of user zuul. Dec 2 04:18:33 localhost systemd[1]: Started Session 41 of User zuul. Dec 2 04:18:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19656 DF PROTO=TCP SPT=57260 DPT=9105 SEQ=2171853980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478AD7CE0000000001030307) Dec 2 04:18:36 localhost python3.9[130446]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:18:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2140 DF PROTO=TCP SPT=36282 DPT=9102 SEQ=3021099205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478AE3650000000001030307) Dec 2 04:18:38 localhost python3.9[130542]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None Dec 2 04:18:39 localhost python3.9[130635]: ansible-ansible.legacy.setup Invoked with 
filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:18:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11567 DF PROTO=TCP SPT=53918 DPT=9100 SEQ=565826996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478AEF240000000001030307) Dec 2 04:18:40 localhost python3.9[130689]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:18:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10262 DF PROTO=TCP SPT=45458 DPT=9882 SEQ=2592248336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478AFB640000000001030307) Dec 2 04:18:45 localhost python3.9[130813]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:18:46 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11569 DF PROTO=TCP SPT=53918 DPT=9100 SEQ=565826996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B06E40000000001030307) Dec 2 04:18:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19660 DF PROTO=TCP SPT=57260 DPT=9105 SEQ=2171853980 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B13E40000000001030307) Dec 2 04:18:49 localhost python3.9[130953]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 2 04:18:51 localhost python3.9[131046]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:18:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29055 DF PROTO=TCP SPT=58352 DPT=9101 SEQ=3716116661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B1EC00000000001030307) Dec 2 04:18:52 localhost python3.9[131138]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None Dec 2 04:18:53 localhost kernel: SELinux: Converting 2756 SID table entries... 
Dec 2 04:18:53 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:18:53 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:18:53 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:18:53 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:18:53 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:18:53 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:18:53 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:18:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29057 DF PROTO=TCP SPT=58352 DPT=9101 SEQ=3716116661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B2AE40000000001030307) Dec 2 04:18:55 localhost python3.9[131233]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:18:56 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=18 res=1 Dec 2 04:18:56 localhost python3.9[131331]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None 
download_dir=None list=None nobest=None releasever=None Dec 2 04:18:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29058 DF PROTO=TCP SPT=58352 DPT=9101 SEQ=3716116661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B3AA40000000001030307) Dec 2 04:19:02 localhost python3.9[131425]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:19:03 localhost python3.9[131670]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 04:19:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45912 DF PROTO=TCP SPT=36228 DPT=9102 SEQ=2758390445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B4C7E0000000001030307) Dec 2 04:19:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43444 DF PROTO=TCP SPT=40744 DPT=9105 SEQ=2968734845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B4CFE0000000001030307) Dec 2 
04:19:04 localhost python3.9[131760]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:19:05 localhost python3.9[131854]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:19:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45914 DF PROTO=TCP SPT=36228 DPT=9102 SEQ=2758390445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B58A40000000001030307) Dec 2 04:19:09 localhost python3.9[131948]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:19:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60266 DF PROTO=TCP 
SPT=53026 DPT=9100 SEQ=461185822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B64640000000001030307) Dec 2 04:19:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37304 DF PROTO=TCP SPT=57024 DPT=9100 SEQ=2549518513 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B6FE40000000001030307) Dec 2 04:19:13 localhost python3.9[132042]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 04:19:13 localhost systemd[1]: Reloading. Dec 2 04:19:13 localhost systemd-rc-local-generator[132068]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:19:13 localhost systemd-sysv-generator[132072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:19:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:19:14 localhost python3.9[132174]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:19:15 localhost python3.9[132266]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60268 DF PROTO=TCP SPT=53026 DPT=9100 SEQ=461185822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B7C240000000001030307)
Dec 2 04:19:16 localhost python3.9[132360]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:17 localhost python3.9[132452]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:17 localhost python3.9[132544]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:19:18 localhost python3.9[132617]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667157.455903-564-54375218309095/.source _original_basename=.d0hy13k0 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45916 DF PROTO=TCP SPT=36228 DPT=9102 SEQ=2758390445 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B87E40000000001030307)
Dec 2 04:19:19 localhost python3.9[132709]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:20 localhost python3.9[132801]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 2 04:19:20 localhost python3.9[132893]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:21 localhost python3.9[132985]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:19:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13548 DF PROTO=TCP SPT=49356 DPT=9101 SEQ=67537735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B93F00000000001030307)
Dec 2 04:19:22 localhost python3.9[133058]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667161.1020353-690-105109381684194/.source.yaml _original_basename=.f3eku8ka follow=False checksum=06d744ebe702728c19f6d1a8f97158d086012058 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:23 localhost python3.9[133150]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 2 04:19:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13550 DF PROTO=TCP SPT=49356 DPT=9101 SEQ=67537735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478B9FE40000000001030307)
Dec 2 04:19:25 localhost ansible-async_wrapper.py[133255]: Invoked with j210078628763 300 /home/zuul/.ansible/tmp/ansible-tmp-1764667164.793688-762-239923763327776/AnsiballZ_edpm_os_net_config.py _
Dec 2 04:19:25 localhost ansible-async_wrapper.py[133258]: Starting module and watcher
Dec 2 04:19:25 localhost ansible-async_wrapper.py[133258]: Start watching 133259 (300)
Dec 2 04:19:25 localhost ansible-async_wrapper.py[133259]: Start module (133259)
Dec 2 04:19:25 localhost ansible-async_wrapper.py[133255]: Return async_wrapper task started.
Dec 2 04:19:25 localhost python3.9[133260]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Dec 2 04:19:26 localhost ansible-async_wrapper.py[133259]: Module complete (133259)
Dec 2 04:19:29 localhost python3.9[133352]: ansible-ansible.legacy.async_status Invoked with jid=j210078628763.133255 mode=status _async_dir=/root/.ansible_async
Dec 2 04:19:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13551 DF PROTO=TCP SPT=49356 DPT=9101 SEQ=67537735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478BAFA40000000001030307)
Dec 2 04:19:29 localhost python3.9[133411]: ansible-ansible.legacy.async_status Invoked with jid=j210078628763.133255 mode=cleanup _async_dir=/root/.ansible_async
Dec 2 04:19:30 localhost ansible-async_wrapper.py[133258]: Done in kid B.
Dec 2 04:19:30 localhost python3.9[133503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:19:31 localhost python3.9[133576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667170.1871982-828-73674880052935/.source.returncode _original_basename=.s_xqdy0a follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:31 localhost python3.9[133668]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:19:33 localhost python3.9[133741]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667171.4304514-876-138685012028521/.source.cfg _original_basename=.1slauv99 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:19:33 localhost python3.9[133833]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:19:33 localhost systemd[1]: Reloading Network Manager...
Dec 2 04:19:33 localhost NetworkManager[5965]: [1764667173.8678] audit: op="reload" arg="0" pid=133837 uid=0 result="success"
Dec 2 04:19:33 localhost NetworkManager[5965]: [1764667173.8690] config: signal: SIGHUP (no changes from disk)
Dec 2 04:19:33 localhost systemd[1]: Reloaded Network Manager.
Dec 2 04:19:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51444 DF PROTO=TCP SPT=49228 DPT=9102 SEQ=368648226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478BC1AE0000000001030307)
Dec 2 04:19:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12819 DF PROTO=TCP SPT=34848 DPT=9105 SEQ=1210016272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478BC22E0000000001030307)
Dec 2 04:19:34 localhost systemd[1]: session-41.scope: Deactivated successfully.
Dec 2 04:19:34 localhost systemd[1]: session-41.scope: Consumed 36.390s CPU time.
Dec 2 04:19:34 localhost systemd-logind[757]: Session 41 logged out. Waiting for processes to exit.
Dec 2 04:19:34 localhost systemd-logind[757]: Removed session 41.
Dec 2 04:19:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51446 DF PROTO=TCP SPT=49228 DPT=9102 SEQ=368648226 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478BCDA40000000001030307)
Dec 2 04:19:39 localhost sshd[133853]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:19:39 localhost systemd-logind[757]: New session 42 of user zuul.
Dec 2 04:19:39 localhost systemd[1]: Started Session 42 of User zuul.
Dec 2 04:19:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23825 DF PROTO=TCP SPT=42168 DPT=9100 SEQ=4064404138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478BD9A40000000001030307)
Dec 2 04:19:40 localhost python3.9[133946]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:19:41 localhost python3.9[134040]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:19:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11931 DF PROTO=TCP SPT=34416 DPT=9882 SEQ=238813804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478BE5A40000000001030307)
Dec 2 04:19:44 localhost python3.9[134193]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:19:45 localhost systemd[1]: session-42.scope: Deactivated successfully.
Dec 2 04:19:45 localhost systemd[1]: session-42.scope: Consumed 2.227s CPU time.
Dec 2 04:19:45 localhost systemd-logind[757]: Session 42 logged out. Waiting for processes to exit.
Dec 2 04:19:45 localhost systemd-logind[757]: Removed session 42.
Dec 2 04:19:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23827 DF PROTO=TCP SPT=42168 DPT=9100 SEQ=4064404138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478BF1640000000001030307)
Dec 2 04:19:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12823 DF PROTO=TCP SPT=34848 DPT=9105 SEQ=1210016272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478BFDE50000000001030307)
Dec 2 04:19:51 localhost sshd[134337]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:19:51 localhost systemd-logind[757]: New session 43 of user zuul.
Dec 2 04:19:51 localhost systemd[1]: Started Session 43 of User zuul.
Dec 2 04:19:52 localhost python3.9[134430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:19:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8244 DF PROTO=TCP SPT=39278 DPT=9101 SEQ=3243245265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C09200000000001030307)
Dec 2 04:19:53 localhost python3.9[134524]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:19:54 localhost python3.9[134620]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:19:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8246 DF PROTO=TCP SPT=39278 DPT=9101 SEQ=3243245265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C15250000000001030307)
Dec 2 04:19:55 localhost python3.9[134674]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:19:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8247 DF PROTO=TCP SPT=39278 DPT=9101 SEQ=3243245265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C24E50000000001030307)
Dec 2 04:19:59 localhost python3.9[134768]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:20:01 localhost python3.9[134923]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:02 localhost python3.9[135015]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:20:02 localhost python3.9[135120]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:03 localhost python3.9[135168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48692 DF PROTO=TCP SPT=48786 DPT=9102 SEQ=3515766592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C36DE0000000001030307)
Dec 2 04:20:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53548 DF PROTO=TCP SPT=56554 DPT=9105 SEQ=3006067946 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C375F0000000001030307)
Dec 2 04:20:04 localhost python3.9[135260]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:05 localhost python3.9[135308]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:20:06 localhost python3.9[135400]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:20:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48694 DF PROTO=TCP SPT=48786 DPT=9102 SEQ=3515766592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C42E40000000001030307)
Dec 2 04:20:07 localhost python3.9[135492]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:20:07 localhost python3.9[135584]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:20:08 localhost python3.9[135676]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:20:09 localhost python3.9[135768]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:20:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37480 DF PROTO=TCP SPT=53030 DPT=9100 SEQ=1194465109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C4EE40000000001030307)
Dec 2 04:20:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55643 DF PROTO=TCP SPT=35440 DPT=9882 SEQ=1911252482 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C5AE40000000001030307)
Dec 2 04:20:14 localhost python3.9[135862]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:20:15 localhost python3.9[135956]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:20:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37482 DF PROTO=TCP SPT=53030 DPT=9100 SEQ=1194465109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C66A50000000001030307)
Dec 2 04:20:16 localhost python3.9[136048]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:20:17 localhost python3.9[136140]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:20:18 localhost python3.9[136233]: ansible-service_facts Invoked
Dec 2 04:20:18 localhost network[136250]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 04:20:18 localhost network[136251]: 'network-scripts' will be removed from distribution in near future.
Dec 2 04:20:18 localhost network[136252]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 04:20:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:20:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53552 DF PROTO=TCP SPT=56554 DPT=9105 SEQ=3006067946 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C73E40000000001030307)
Dec 2 04:20:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8119 DF PROTO=TCP SPT=51554 DPT=9101 SEQ=373800939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C7E500000000001030307)
Dec 2 04:20:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8121 DF PROTO=TCP SPT=51554 DPT=9101 SEQ=373800939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C8A640000000001030307)
Dec 2 04:20:25 localhost python3.9[136573]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:20:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8122 DF PROTO=TCP SPT=51554 DPT=9101 SEQ=373800939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478C9A240000000001030307)
Dec 2 04:20:30 localhost python3.9[136667]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 2 04:20:32 localhost python3.9[136759]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:33 localhost python3.9[136834]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667231.8978214-658-100299384225247/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42353 DF PROTO=TCP SPT=40878 DPT=9102 SEQ=1386749150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CAC0D0000000001030307)
Dec 2 04:20:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1098 DF PROTO=TCP SPT=54984 DPT=9105 SEQ=3549678416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CAC8E0000000001030307)
Dec 2 04:20:34 localhost python3.9[136928]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:34 localhost python3.9[137003]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667233.407703-703-18285816425938/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:36 localhost python3.9[137097]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42355 DF PROTO=TCP SPT=40878 DPT=9102 SEQ=1386749150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CB8250000000001030307)
Dec 2 04:20:38 localhost python3.9[137191]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:20:39 localhost python3.9[137245]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:20:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11934 DF PROTO=TCP SPT=34416 DPT=9882 SEQ=238813804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CC3E50000000001030307)
Dec 2 04:20:41 localhost python3.9[137339]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:20:41 localhost python3.9[137393]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:20:41 localhost systemd[1]: Stopping NTP client/server...
Dec 2 04:20:41 localhost chronyd[25712]: chronyd exiting
Dec 2 04:20:41 localhost systemd[1]: chronyd.service: Deactivated successfully.
Dec 2 04:20:41 localhost systemd[1]: Stopped NTP client/server.
Dec 2 04:20:41 localhost systemd[1]: Starting NTP client/server...
Dec 2 04:20:41 localhost chronyd[137402]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 2 04:20:41 localhost chronyd[137402]: Frequency -26.465 +/- 0.266 ppm read from /var/lib/chrony/drift
Dec 2 04:20:41 localhost chronyd[137402]: Loaded seccomp filter (level 2)
Dec 2 04:20:41 localhost systemd[1]: Started NTP client/server.
Dec 2 04:20:42 localhost systemd[1]: session-43.scope: Deactivated successfully.
Dec 2 04:20:42 localhost systemd[1]: session-43.scope: Consumed 28.564s CPU time.
Dec 2 04:20:42 localhost systemd-logind[757]: Session 43 logged out. Waiting for processes to exit.
Dec 2 04:20:42 localhost systemd-logind[757]: Removed session 43.
Dec 2 04:20:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23830 DF PROTO=TCP SPT=42168 DPT=9100 SEQ=4064404138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CCFE50000000001030307)
Dec 2 04:20:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39345 DF PROTO=TCP SPT=41560 DPT=9100 SEQ=3376193355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CDBE40000000001030307)
Dec 2 04:20:47 localhost sshd[137418]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:20:48 localhost systemd-logind[757]: New session 44 of user zuul.
Dec 2 04:20:48 localhost systemd[1]: Started Session 44 of User zuul.
Dec 2 04:20:49 localhost python3.9[137511]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:20:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42357 DF PROTO=TCP SPT=40878 DPT=9102 SEQ=1386749150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CE7E40000000001030307)
Dec 2 04:20:50 localhost python3.9[137607]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:51 localhost python3.9[137712]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:51 localhost python3.9[137790]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.vnldosby recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25920 DF PROTO=TCP SPT=57488 DPT=9101 SEQ=205900510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CF3800000000001030307)
Dec 2 04:20:52 localhost python3.9[137913]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:53 localhost python3.9[138003]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667252.019576-144-253399547474831/.source _original_basename=.z74m1wd5 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:53 localhost python3.9[138095]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:20:54 localhost auditd[710]: Audit daemon rotating log files
Dec 2 04:20:55 localhost python3.9[138187]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25922 DF PROTO=TCP SPT=57488 DPT=9101 SEQ=205900510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478CFFA50000000001030307)
Dec 2 04:20:55 localhost python3.9[138260]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667254.6289587-216-13661394177182/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:20:56 localhost python3.9[138352]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:57 localhost python3.9[138425]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667255.623115-216-44223108629827/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:20:58 localhost python3.9[138517]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:58 localhost python3.9[138609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:20:59 localhost python3.9[138682]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667258.2117448-327-102065873164020/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:20:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25923 DF PROTO=TCP SPT=57488 DPT=9101 SEQ=205900510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D0F640000000001030307)
Dec 2 04:20:59 localhost python3.9[138774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:00 localhost python3.9[138847]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667259.3473492-372-23090602061529/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:01 localhost python3.9[138939]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:21:01 localhost systemd[1]: Reloading.
Dec 2 04:21:01 localhost systemd-sysv-generator[138965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:21:01 localhost systemd-rc-local-generator[138962]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:21:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:21:01 localhost systemd[1]: Reloading.
Dec 2 04:21:01 localhost systemd-rc-local-generator[139002]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:21:01 localhost systemd-sysv-generator[139006]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:21:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:21:02 localhost systemd[1]: Starting EDPM Container Shutdown...
Dec 2 04:21:02 localhost systemd[1]: Finished EDPM Container Shutdown.
Dec 2 04:21:02 localhost python3.9[139107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:03 localhost python3.9[139180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667262.2951865-441-231971350971928/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:03 localhost python3.9[139272]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27965 DF PROTO=TCP SPT=58890 DPT=9102 SEQ=1000295349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D213D0000000001030307)
Dec 2 04:21:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24456 DF PROTO=TCP SPT=34966 DPT=9105 SEQ=697319510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D21BE0000000001030307)
Dec 2 04:21:05 localhost python3.9[139345]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667263.4492674-486-63915310449455/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:06 localhost python3.9[139437]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:21:06 localhost systemd[1]: Reloading.
Dec 2 04:21:06 localhost systemd-sysv-generator[139466]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:21:06 localhost systemd-rc-local-generator[139463]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:21:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:21:06 localhost systemd[1]: Starting Create netns directory...
Dec 2 04:21:06 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 2 04:21:06 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 2 04:21:06 localhost systemd[1]: Finished Create netns directory.
Dec 2 04:21:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27967 DF PROTO=TCP SPT=58890 DPT=9102 SEQ=1000295349 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D2D640000000001030307)
Dec 2 04:21:08 localhost python3.9[139569]: ansible-ansible.builtin.service_facts Invoked
Dec 2 04:21:08 localhost network[139586]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 04:21:08 localhost network[139587]: 'network-scripts' will be removed from distribution in near future.
Dec 2 04:21:08 localhost network[139588]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 04:21:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:21:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59818 DF PROTO=TCP SPT=58568 DPT=9100 SEQ=829029851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D39240000000001030307)
Dec 2 04:21:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32512 DF PROTO=TCP SPT=35890 DPT=9882 SEQ=1419328620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D45640000000001030307)
Dec 2 04:21:13 localhost python3.9[139789]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:14 localhost python3.9[139864]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667273.498382-609-230974842590917/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:15 localhost python3.9[139957]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 2 04:21:15 localhost systemd[1]: Reloading OpenSSH server daemon...
Dec 2 04:21:15 localhost systemd[1]: Reloaded OpenSSH server daemon.
Dec 2 04:21:15 localhost sshd[119826]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:21:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59820 DF PROTO=TCP SPT=58568 DPT=9100 SEQ=829029851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D50E50000000001030307)
Dec 2 04:21:17 localhost python3.9[140053]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:18 localhost python3.9[140145]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:19 localhost python3.9[140218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667278.128267-702-108910026521099/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24460 DF PROTO=TCP SPT=34966 DPT=9105 SEQ=697319510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D5DE40000000001030307)
Dec 2 04:21:20 localhost python3.9[140310]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 2 04:21:20 localhost systemd[1]: Starting Time & Date Service...
Dec 2 04:21:20 localhost systemd[1]: Started Time & Date Service.
Dec 2 04:21:20 localhost python3.9[140406]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:21 localhost python3.9[140499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:22 localhost python3.9[140572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667281.1775408-807-257800377320808/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51573 DF PROTO=TCP SPT=40064 DPT=9101 SEQ=1377099721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D68B00000000001030307)
Dec 2 04:21:22 localhost python3.9[140664]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:23 localhost python3.9[140737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667282.3402638-852-80149997879267/.source.yaml _original_basename=.eggbihc9 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:23 localhost python3.9[140829]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:24 localhost python3.9[140904]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667283.5345683-897-35753284200341/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51575 DF PROTO=TCP SPT=40064 DPT=9101 SEQ=1377099721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D74A40000000001030307)
Dec 2 04:21:25 localhost python3.9[140996]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:21:26 localhost python3.9[141089]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:21:27 localhost python3[141182]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 2 04:21:28 localhost python3.9[141274]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:28 localhost python3.9[141347]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667287.636131-1014-228115917145281/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51576 DF PROTO=TCP SPT=40064 DPT=9101 SEQ=1377099721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D84650000000001030307)
Dec 2 04:21:29 localhost python3.9[141439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:30 localhost python3.9[141512]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667289.0264647-1059-170007372066710/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:30 localhost python3.9[141604]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:31 localhost python3.9[141677]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667290.297134-1104-232397804750560/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:31 localhost python3.9[141769]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:32 localhost python3.9[141842]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667291.479275-1150-74123217406858/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:33 localhost python3.9[141934]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15219 DF PROTO=TCP SPT=59858 DPT=9102 SEQ=515532362 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D966D0000000001030307)
Dec 2 04:21:33 localhost python3.9[142007]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667292.6718388-1194-166597013808890/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52300 DF PROTO=TCP SPT=39974 DPT=9105 SEQ=1851322314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478D96EF0000000001030307)
Dec 2 04:21:34 localhost python3.9[142099]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:35 localhost python3.9[142191]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:21:36 localhost python3.9[142286]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:36 localhost python3.9[142379]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:37 localhost python3.9[142471]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1104 DF PROTO=TCP SPT=54984 DPT=9105 SEQ=3549678416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478DA5E40000000001030307)
Dec 2 04:21:38 localhost python3.9[142563]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 2 04:21:39 localhost python3.9[142656]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 2 04:21:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14171 DF PROTO=TCP SPT=45958 DPT=9882 SEQ=3463406250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478DADE40000000001030307)
Dec 2 04:21:40 localhost systemd[1]: session-44.scope: Deactivated successfully.
Dec 2 04:21:40 localhost systemd[1]: session-44.scope: Consumed 28.197s CPU time.
Dec 2 04:21:40 localhost systemd-logind[757]: Session 44 logged out. Waiting for processes to exit.
Dec 2 04:21:40 localhost systemd-logind[757]: Removed session 44.
Dec 2 04:21:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39348 DF PROTO=TCP SPT=41560 DPT=9100 SEQ=3376193355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478DB9E50000000001030307)
Dec 2 04:21:45 localhost sshd[142672]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:21:46 localhost systemd-logind[757]: New session 45 of user zuul.
Dec 2 04:21:46 localhost systemd[1]: Started Session 45 of User zuul.
Dec 2 04:21:46 localhost python3.9[142767]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 2 04:21:48 localhost python3.9[142859]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:21:50 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 2 04:21:50 localhost python3.9[142955]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 2 04:21:52 localhost python3.9[143047]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.cabvjj22 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:21:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12255 DF PROTO=TCP SPT=40070 DPT=9101 SEQ=1751620103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478DDDE10000000001030307)
Dec 2 04:21:52 localhost python3.9[143122]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.cabvjj22 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667311.584219-190-247129849797279/.source.cabvjj22 _original_basename=.rd_st6mg follow=False checksum=9674ae9a797ab88dd38896b99c4666372998fea7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:21:53 localhost podman[143240]: 2025-12-02 09:21:53.692925914 +0000 UTC m=+0.111924453 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, name=rhceph, CEPH_POINT_RELEASE=, release=1763362218)
Dec 2 04:21:53 localhost podman[143240]: 2025-12-02 09:21:53.806147271 +0000 UTC m=+0.225145770 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, io.openshift.expose-services=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph)
Dec 2 04:21:54 localhost python3.9[143429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:21:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25926 DF PROTO=TCP SPT=57488 DPT=9101 SEQ=205900510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478DEDE40000000001030307)
Dec 2 04:21:56 localhost python3.9[143553]: ansible-ansible.builtin.blockinfile Invoked with block=np0005541914.localdomain,192.168.122.108,np0005541914* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHh7115UF/t7QzqWY1fk2wHPOuHuMPRhaYTC/yfMWr+nqJ5/TNZTuFxq0aW/1gHanB2usmC0wpWf4c1KsPZ71Ehs/j5nV1wfGtNVEq5Zj7uhs0ea/SQToF2RS406RoIzJW6ogv4Kl3nxGEK6c44WCu8+Ki98dCQ4wesh5kSBkqgiSq2IZkL2gjoAKeXdracGRJ596gTB0yfsMl/qdJDneVHMq/rptlFhabLeiEN+7C0o0gsZwYsxCd2oSB+DD9KfXhWIBeXRr1B7mFcMZpGNG7pG0d1IjYOUmqjvVpECHrLvjiitS3800ZEFwygU4sbM/DWHelobjtJB/fxxPTtGNlbH4MK/OGFh2mm5jB1LMqWSsifA/ZAHASAAffWDwKtF+xJ06OHRDT6gjzOd7VJpc8kR9Jn9pT7UnjypnrM12GtrO0CH8Lf3rin71kf9iZRIphqWXhiLN3G/mdJC2XPIxJp7NQ1Mqc5IhHciCv80bvsGrzLCtAr16/b+cPYo7vIGU=#012np0005541914.localdomain,192.168.122.108,np0005541914* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGWCSLJV2aPwMTfOaIZ+xjv1QFJPyldmo6H+V71SAll#012np0005541914.localdomain,192.168.122.108,np0005541914* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFoWDrioobP7nWM6onZB+AZBuk/AQQ7zXxT58XHHnNVCXAZxKDdYUpn8CqfQBodfVNr1sWDyzBr0D5lMGYZypzo=#012np0005541909.localdomain,192.168.122.103,np0005541909* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0b4xecJ9cZa0s7FCPYSs6kLrfHyBh8YL/KS+tj3DrfUU03KCcmbHQesHBBcRxB6PDYjueAsvx5rGXzjMojO5Jz2DlZoSPaBM9tm/HAKWhaiL+seTfrRsNLFvxfWyxU/x0FUSOTf01ZThrT/IJ5WkfJD4UgZQSzUPucffImwFt4y2oERfa96sAwSwE4o5RuLzRdKuWB3npxcApj2/3+pyWR59yubokMiU506MI37Hbg8xCaC5qn4ISKB8WBJObICoNQoatrbcqSOrrUEFv/vcWANDYUEw6XzTTwkuIu6dJPJiJh8j5TzDnnvKSK+f3eEG7OCiz814F+o82tDo7U6k5ERO0xmElXdOlPYsiuM5+CTQmmm6xmFN2L3HIvZlyPn3oF26oV+INAd3XsF5MIFcfpGUXH5b04gE7LhpdVLVfLGGYSVWjZhzxl/Wa0OiHoMaDUYoN2bPG0h5SPUDIyDv2jW3FDxhOWANR/9ITUCQpz3gSwl/1AVN3HCWf+RUeLuE=#012np0005541909.localdomain,192.168.122.103,np0005541909* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7RcuDge6wF/g+qZxY6m8WG6IEuMAvvdJQnnCjLs+Z1#012np0005541909.localdomain,192.168.122.103,np0005541909* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP5sNXub2DBEGdchrrXonnWitouBamsCHQlfu1Eq48/u/VA5EJmoCHsMI/KSOMxMnSS+uUeGceHpl9AyeHtY2NU=#012np0005541910.localdomain,192.168.122.104,np0005541910* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOmh2HMG9Y5+9VA8Ap3pHIOQhG/GfAsIqnmfJJuGwKb8N2T9r1Yd+kmoP7Xs41cto4h6Fw1f4Pa6Tw050y3LmwpXvDN+2Qq1qYI0rT4pqOiYBkyMbOQhqLF5tA+MNYGdibQj/fWkG+gKa8wwzkTgCEAn6PgEZiqR9LFJrqr4RfQDxaWCLmXM96+AVGG5/SXWx5u6T3lanUnpcfISvB2yx4HifsINAHPgLR4weEzra/b7e0QNyxItxvlDseasPyeYHD3Hdi2PNuUmoZC+zWEoWoU3BMAQeXR7lmEcdtyK5wr0pIBmf0CKFdvGrdVWrzAUbDc8ZHXmWyKlWHHZvHch1V2r/S4J2983UsG3sJwM8954Tj325LgS1nldIYBSjwMGfhZFYzmy9obAN7ZSV5qwD0h+rxt/I9RNdXS3SRu9tOZI+AN59De44cF23OJS5MfrfnB7JUnBOv4ScVML4rPjPx9L4/omOlfbBVJx42b1RlboXEk52J7Aa3xRseA4Elvuk=#012np0005541910.localdomain,192.168.122.104,np0005541910* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIx+QMGsIWmPvyCeFcRzy+Z3KrW6oIHjAujq2mTiluKE#012np0005541910.localdomain,192.168.122.104,np0005541910* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPiujdvwsNBrUjQMVBj6TBCEcpbfZIgHcCBzjuRUWPac2ltR7NNO2aF0KEDTH4F4qoWK7fw0fn0UFKuTrY4INV8=#012np0005541913.localdomain,192.168.122.107,np0005541913* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYXeXWwxJkeR9i2V9hYiVGqEGSbkwFIKUbTm3m8em9m5o380jUORSYXOITLm0CAl/waSYEc4fiPu2sAYDISig1zqAItfAODEdayFoKK63ui7vq92ZPKayhmjahj2jNo3KMAZ5aFzNBcowsRooRqLNJ7R9BAQ4H8kdqL9xdRjy5bvfWJHGrm8PvWcUaRYebCQ35j+7nHq4RFRYsd964NKjrq+FxkjyOSs2AxE+SHYOVgAAd8Jp2uyr3dR56IzWy8WqQzPj6tlsER8+/Kt1lASATcuMFeteA0M7tbjZxEIAPyfktPVQOq9mgeFOFmTf8oTbt94Rk2QmyNI4oE7sQHFWo9UWrvZd9LpDDartUls5uHunn4SzvgvtRimO3e1hNXn0VQLGNfSUwGij0R3iOYJpACHgly3J7sbX3tROvwRpawZlGIGZY46vaYRMXGClXz+lUCa6ZZO+f6BX6bEt0VfYWX8IVmnH2oJXEJBYJPVXZML+OcczJc8zEfHxBylpZn4k=#012np0005541913.localdomain,192.168.122.107,np0005541913* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEGKyrd1x8JIpNEVeXNPog2z4+Z1Gyh32lFLn9uh2H3I#012np0005541913.localdomain,192.168.122.107,np0005541913* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAGOHjEHyYQ71qgLjQWD4LGL0rAKniN6cBK/Yx+b+dGqDveVXKGlkaXQOOfCp4GEX5fDI6bqBjCB02Ool/6wTT8=#012np0005541911.localdomain,192.168.122.105,np0005541911* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzI5YTDMvj8zBlKqeNplIMBQQJ43gcDfB5cRE7DwwpHBRcqOuhSoIm7r0C3h5ABQJYkTXEGRY0i5HC5eMErD7SKRJJ3q9aZ+uv4VvUGagr7M9S/JGUjZej2+ACXZ7L+d9MLt389xVtIuuNh5Cy3U8muIBEAS1b4mXOJ95eiW3M5b2hxmol0DTjUMX/bLtJU/MQ09wE72pj6Uqz/CCFsUwDBZlQ3jcVK74fYwgItCNkLJ+D2E4wTl4Ei8XOlEY9cV8B1E+aK6iUKesiya0Vfi/Ant77ONQDeCsI21AJDbi5wtUXg4qXBu3Z/zObZiEmedzqWj7K46Nv8lDlQoeoKuxzTCwxgn0PaorQgkUvUdAyk5Qo4BaUOv8ojICiZvRy9QZ3jblr1dCM/Jy3g4Sz6Hz4QHxtV21nUw//sBN2X6jCHQVGTJeZrbVvgGNcGiqcCzQTW/4NoiOB0ho7RVNtD+oYb5UE+Lh+Ibua3bv7zfnLjsw1GiyclsCgrQTKBl8Netc=#012np0005541911.localdomain,192.168.122.105,np0005541911* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILT7VjxC/vKVj4DmZTIjCQwrK+UN5wih4A5ddEFb5wLX#012np0005541911.localdomain,192.168.122.105,np0005541911* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJ5o8j1+/xDc8zMV2yChXY+U6nf1GT6sS3GGAkd+aR/6mUWuiQzjkFESsidYGPHaqz55q4REeXXQtW6T8mmqzU=#012np0005541912.localdomain,192.168.122.106,np0005541912* ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKgyHtHHKWFdaOqx5AsvOJPmNsbjVxvzh05A7Hy02rgbdg4zBUd/E0mqG+tYVGg12fIdbRNgjUfM+PEGJznZdEQnZCtLgMhbpRC33IbCXMw7Ev/tRfkffpP+H8VdyGL83zCFFnMIMD2IDWU+MjTf/ais63Zv/UiBL24pkZ18u3nypjN3uN2FdeDF4JNtnSVK6i1a+wE6wLmdSAfX8ovFbLhZMgAAPU3I3Fu5D/pSa6OjKshEcNy0m6KCKwQoT6cbDGsnMjd2sdE1Vc+KgkrBN3fMmrChdgi2Ig7CpkdGvQF0G/t53cwNatjp78FrNCHjpLcIAFw3QgfepiTiXQbXQ/jC5xkdM+5wIcSmB3rf3GKaUgaxnjk55GAXxrHwAFwOi+ltxSNPszH9vfIBLluThUdmQmvtCOCvEFZ5uuVuu94A5frS9BzOIzz7ylrqau3nHGaPjbT80XubnqZsHlOahsovbk1mu3ewvoitAVb0E+BBroNWeHT9BbA8Igh+sxwGM=#012np0005541912.localdomain,192.168.122.106,np0005541912* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZZ0KsiMflqlnr0GTYoucjExbwZ18yPSOiSsfRMt90v#012np0005541912.localdomain,192.168.122.106,np0005541912* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGm4CXNWO0ZHMO4eJHc4n6NO7LQlY2+Ctp7F81Y3AEXQl3GIl2c/UCuL0O5ZJj6nEB654FSLAuOOifViFW8rlDc=#012 create=True mode=0644 path=/tmp/ansible.cabvjj22 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:21:58 localhost python3.9[143645]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.cabvjj22' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:22:00 localhost python3.9[143739]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.cabvjj22 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 2 04:22:01 localhost systemd[1]: session-45.scope: Deactivated successfully. Dec 2 04:22:01 localhost systemd[1]: session-45.scope: Consumed 4.216s CPU time. Dec 2 04:22:01 localhost systemd-logind[757]: Session 45 logged out. Waiting for processes to exit. Dec 2 04:22:01 localhost systemd-logind[757]: Removed session 45. Dec 2 04:22:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31968 DF PROTO=TCP SPT=42600 DPT=9102 SEQ=1946441030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E0BAD0000000001030307) Dec 2 04:22:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4054 DF PROTO=TCP SPT=42590 DPT=9105 SEQ=3265164041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E0C1F0000000001030307) Dec 2 04:22:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8300 DF PROTO=TCP SPT=50462 DPT=9882 SEQ=1476744479 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E13DC0000000001030307) Dec 2 04:22:07 localhost sshd[143754]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:22:07 localhost systemd-logind[757]: New session 46 of user zuul. Dec 2 04:22:07 localhost systemd[1]: Started Session 46 of User zuul. 
Dec 2 04:22:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50099 DF PROTO=TCP SPT=34458 DPT=9100 SEQ=3341923684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E1F8B0000000001030307) Dec 2 04:22:09 localhost python3.9[143847]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:22:10 localhost python3.9[143943]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 04:22:11 localhost python3.9[144037]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:22:12 localhost python3.9[144130]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:22:13 localhost python3.9[144223]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:22:13 localhost python3.9[144317]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:22:14 localhost python3.9[144412]: ansible-ansible.builtin.file Invoked with 
path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:15 localhost systemd-logind[757]: Session 46 logged out. Waiting for processes to exit. Dec 2 04:22:15 localhost systemd[1]: session-46.scope: Deactivated successfully. Dec 2 04:22:15 localhost systemd[1]: session-46.scope: Consumed 3.973s CPU time. Dec 2 04:22:15 localhost systemd-logind[757]: Removed session 46. Dec 2 04:22:21 localhost sshd[144427]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:22:21 localhost systemd-logind[757]: New session 47 of user zuul. Dec 2 04:22:21 localhost systemd[1]: Started Session 47 of User zuul. Dec 2 04:22:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12459 DF PROTO=TCP SPT=60278 DPT=9101 SEQ=3332007714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E530F0000000001030307) Dec 2 04:22:22 localhost python3.9[144520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12460 DF PROTO=TCP SPT=60278 DPT=9101 SEQ=3332007714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E57240000000001030307) Dec 2 04:22:23 localhost python3.9[144616]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:22:24 localhost 
python3.9[144670]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 2 04:22:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12461 DF PROTO=TCP SPT=60278 DPT=9101 SEQ=3332007714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E5F240000000001030307) Dec 2 04:22:28 localhost python3.9[144762]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:22:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12462 DF PROTO=TCP SPT=60278 DPT=9101 SEQ=3332007714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E6EE40000000001030307) Dec 2 04:22:30 localhost python3.9[144855]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 
04:22:31 localhost python3.9[144947]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:32 localhost python3.9[145039]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:33 localhost python3.9[145129]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:22:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62073 DF PROTO=TCP SPT=43312 DPT=9102 SEQ=1471303209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E80CE0000000001030307) Dec 2 04:22:34 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19204 DF PROTO=TCP SPT=52262 DPT=9105 SEQ=1934263459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E814F0000000001030307) Dec 2 04:22:34 localhost python3.9[145219]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:22:34 localhost python3.9[145311]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:22:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62074 DF PROTO=TCP SPT=43312 DPT=9102 SEQ=1471303209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E84E40000000001030307) Dec 2 04:22:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19205 DF PROTO=TCP SPT=52262 DPT=9105 SEQ=1934263459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E85640000000001030307) Dec 2 04:22:35 localhost systemd[1]: session-47.scope: Deactivated successfully. Dec 2 04:22:35 localhost systemd[1]: session-47.scope: Consumed 9.854s CPU time. Dec 2 04:22:35 localhost systemd-logind[757]: Session 47 logged out. Waiting for processes to exit. Dec 2 04:22:35 localhost systemd-logind[757]: Removed session 47. 
Dec 2 04:22:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64815 DF PROTO=TCP SPT=51430 DPT=9882 SEQ=2114928465 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E890C0000000001030307) Dec 2 04:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62075 DF PROTO=TCP SPT=43312 DPT=9102 SEQ=1471303209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E8CE40000000001030307) Dec 2 04:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64816 DF PROTO=TCP SPT=51430 DPT=9882 SEQ=2114928465 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E8D240000000001030307) Dec 2 04:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19206 DF PROTO=TCP SPT=52262 DPT=9105 SEQ=1934263459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E8D650000000001030307) Dec 2 04:22:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24045 DF PROTO=TCP SPT=46790 DPT=9100 SEQ=46980730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478E98E40000000001030307) Dec 2 04:22:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64818 DF PROTO=TCP SPT=51430 DPT=9882 SEQ=2114928465 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A478EA4E50000000001030307) Dec 2 04:22:43 localhost sshd[145326]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:22:43 localhost systemd-logind[757]: New session 48 of user zuul. Dec 2 04:22:43 localhost systemd[1]: Started Session 48 of User zuul. Dec 2 04:22:44 localhost python3.9[145419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:22:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24047 DF PROTO=TCP SPT=46790 DPT=9100 SEQ=46980730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478EB0A40000000001030307) Dec 2 04:22:46 localhost python3.9[145515]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:47 localhost python3.9[145607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:48 localhost python3.9[145680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667366.7862518-182-237833545214715/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:48 localhost python3.9[145772]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62077 DF PROTO=TCP SPT=43312 DPT=9102 SEQ=1471303209 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478EBDE40000000001030307) Dec 2 04:22:49 localhost python3.9[145864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:50 localhost python3.9[145937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667369.0751545-253-39627067167492/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:51 localhost python3.9[146029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:51 localhost chronyd[137402]: Selected source 162.159.200.123 (pool.ntp.org) Dec 2 04:22:51 localhost python3.9[146121]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36387 DF PROTO=TCP SPT=59992 DPT=9101 SEQ=1923884833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478EC8400000000001030307) Dec 2 04:22:52 localhost python3.9[146194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667371.2581842-324-253539572682250/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:53 localhost python3.9[146286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None attributes=None Dec 2 04:22:53 localhost python3.9[146378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:54 localhost python3.9[146451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667373.2502956-392-32427731805147/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:55 localhost python3.9[146543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36389 DF PROTO=TCP SPT=59992 DPT=9101 SEQ=1923884833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478ED4640000000001030307) Dec 2 04:22:55 localhost python3.9[146635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:56 localhost python3.9[146739]: 
ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667375.2379694-464-190194588618507/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:56 localhost python3.9[146861]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:57 localhost python3.9[146968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:58 localhost python3.9[147041]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667377.0993915-534-211431127967123/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:22:58 localhost python3.9[147133]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:22:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36390 DF PROTO=TCP SPT=59992 DPT=9101 SEQ=1923884833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478EE4240000000001030307) Dec 2 04:22:59 localhost python3.9[147225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:22:59 localhost python3.9[147298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667378.9759178-604-188095820318620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:23:00 localhost python3.9[147390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:23:01 
localhost python3.9[147482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:23:02 localhost python3.9[147555]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667381.0581806-670-115853626865740/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:23:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44924 DF PROTO=TCP SPT=51692 DPT=9102 SEQ=2714315369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478EF5FD0000000001030307) Dec 2 04:23:03 localhost systemd[1]: session-48.scope: Deactivated successfully. Dec 2 04:23:03 localhost systemd[1]: session-48.scope: Consumed 12.104s CPU time. Dec 2 04:23:03 localhost systemd-logind[757]: Session 48 logged out. Waiting for processes to exit. Dec 2 04:23:03 localhost systemd-logind[757]: Removed session 48. 
Dec 2 04:23:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6054 DF PROTO=TCP SPT=34474 DPT=9105 SEQ=2347424361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478EF67F0000000001030307)
Dec 2 04:23:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44926 DF PROTO=TCP SPT=51692 DPT=9102 SEQ=2714315369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F02240000000001030307)
Dec 2 04:23:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11815 DF PROTO=TCP SPT=38638 DPT=9100 SEQ=3358605897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F0DE40000000001030307)
Dec 2 04:23:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27888 DF PROTO=TCP SPT=57682 DPT=9882 SEQ=1675163659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F1A240000000001030307)
Dec 2 04:23:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11817 DF PROTO=TCP SPT=38638 DPT=9100 SEQ=3358605897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F25A50000000001030307)
Dec 2 04:23:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6058 DF PROTO=TCP SPT=34474 DPT=9105 SEQ=2347424361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F31E40000000001030307)
Dec 2 04:23:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32648 DF PROTO=TCP SPT=58712 DPT=9101 SEQ=3898644437 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F3D700000000001030307)
Dec 2 04:23:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32650 DF PROTO=TCP SPT=58712 DPT=9101 SEQ=3898644437 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F49640000000001030307)
Dec 2 04:23:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32651 DF PROTO=TCP SPT=58712 DPT=9101 SEQ=3898644437 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F59240000000001030307)
Dec 2 04:23:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11928 DF PROTO=TCP SPT=57130 DPT=9102 SEQ=1338226220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F6B2D0000000001030307)
Dec 2 04:23:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2690 DF PROTO=TCP SPT=59382 DPT=9105 SEQ=1675744302 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F6BAF0000000001030307)
Dec 2 04:23:35 localhost sshd[147571]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:23:35 localhost systemd-logind[757]: New session 49 of user zuul.
Dec 2 04:23:35 localhost systemd[1]: Started Session 49 of User zuul.
Dec 2 04:23:36 localhost python3.9[147666]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:23:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11930 DF PROTO=TCP SPT=57130 DPT=9102 SEQ=1338226220 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F77240000000001030307)
Dec 2 04:23:37 localhost python3.9[147758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:23:38 localhost python3.9[147831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667416.8278933-63-246619730869614/.source.conf _original_basename=ceph.conf follow=False checksum=bb050c8012c4b6ce73dbd1d555a91a361a703a4d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:23:38 localhost python3.9[147923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:23:39 localhost python3.9[147996]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667418.1812358-63-26626235553072/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=55e6802793866e8195bd7dc6c06395cc4184e741 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:23:39 localhost systemd[1]: session-49.scope: Deactivated successfully.
Dec 2 04:23:39 localhost systemd[1]: session-49.scope: Consumed 2.272s CPU time.
Dec 2 04:23:39 localhost systemd-logind[757]: Session 49 logged out. Waiting for processes to exit.
Dec 2 04:23:39 localhost systemd-logind[757]: Removed session 49.
Dec 2 04:23:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53855 DF PROTO=TCP SPT=37604 DPT=9100 SEQ=2804230835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F83240000000001030307)
Dec 2 04:23:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52376 DF PROTO=TCP SPT=40320 DPT=9882 SEQ=2329003095 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F8F240000000001030307)
Dec 2 04:23:45 localhost sshd[148012]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:23:45 localhost systemd-logind[757]: New session 50 of user zuul.
Dec 2 04:23:45 localhost systemd[1]: Started Session 50 of User zuul.
Dec 2 04:23:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53857 DF PROTO=TCP SPT=37604 DPT=9100 SEQ=2804230835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478F9AE50000000001030307)
Dec 2 04:23:46 localhost python3.9[148105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:23:47 localhost python3.9[148201]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:23:48 localhost python3.9[148293]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:23:48 localhost python3.9[148383]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:23:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2694 DF PROTO=TCP SPT=59382 DPT=9105 SEQ=1675744302 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478FA7E50000000001030307)
Dec 2 04:23:50 localhost python3.9[148475]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 2 04:23:52 localhost python3.9[148567]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:23:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43256 DF PROTO=TCP SPT=59866 DPT=9101 SEQ=553843952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478FB29F0000000001030307)
Dec 2 04:23:53 localhost python3.9[148621]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:23:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43258 DF PROTO=TCP SPT=59866 DPT=9101 SEQ=553843952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478FBEA40000000001030307)
Dec 2 04:23:57 localhost python3.9[148715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:23:58 localhost python3[148871]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 2 04:23:58 localhost python3.9[148978]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:23:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43259 DF PROTO=TCP SPT=59866 DPT=9101 SEQ=553843952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478FCE650000000001030307)
Dec 2 04:23:59 localhost python3.9[149070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:00 localhost python3.9[149118]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:01 localhost python3.9[149210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:01 localhost python3.9[149258]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.rtbs0qex recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:02 localhost python3.9[149350]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:03 localhost python3.9[149398]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43689 DF PROTO=TCP SPT=42948 DPT=9102 SEQ=538564102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478FE05E0000000001030307)
Dec 2 04:24:03 localhost python3.9[149490]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46099 DF PROTO=TCP SPT=33754 DPT=9105 SEQ=3098192455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478FE0DF0000000001030307)
Dec 2 04:24:05 localhost python3[149583]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 2 04:24:06 localhost python3.9[149675]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:06 localhost python3.9[149750]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667445.824281-432-109213117566183/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43691 DF PROTO=TCP SPT=42948 DPT=9102 SEQ=538564102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478FEC650000000001030307)
Dec 2 04:24:07 localhost python3.9[149842]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:08 localhost python3.9[149917]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667447.1232858-477-260009033598817/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:09 localhost python3.9[150009]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:09 localhost python3.9[150084]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667448.3171823-522-100340655239830/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27891 DF PROTO=TCP SPT=57682 DPT=9882 SEQ=1675163659 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A478FF7E40000000001030307)
Dec 2 04:24:11 localhost python3.9[150176]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:11 localhost python3.9[150251]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667449.932977-567-153691141035749/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:12 localhost python3.9[150343]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11820 DF PROTO=TCP SPT=38638 DPT=9100 SEQ=3358605897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479003E40000000001030307)
Dec 2 04:24:13 localhost python3.9[150418]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667451.8318982-612-132805547251105/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:14 localhost python3.9[150510]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:14 localhost python3.9[150602]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:15 localhost python3.9[150697]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:16 localhost python3.9[150789]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53873 DF PROTO=TCP SPT=55440 DPT=9100 SEQ=2759797587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479010240000000001030307)
Dec 2 04:24:16 localhost python3.9[150882]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:24:17 localhost python3.9[150976]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:18 localhost python3.9[151071]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43693 DF PROTO=TCP SPT=42948 DPT=9102 SEQ=538564102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47901BE50000000001030307)
Dec 2 04:24:19 localhost python3.9[151161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:24:20 localhost python3.9[151254]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005541913.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:80:ac:27:10" external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:20 localhost ovs-vsctl[151255]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005541913.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:80:ac:27:10 external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 2 04:24:21 localhost python3.9[151347]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:24:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49273 DF PROTO=TCP SPT=32970 DPT=9101 SEQ=228231276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479027D00000000001030307)
Dec 2 04:24:22 localhost python3.9[151440]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:24:24 localhost python3.9[151534]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:24 localhost python3.9[151626]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49275 DF PROTO=TCP SPT=32970 DPT=9101 SEQ=228231276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479033E40000000001030307)
Dec 2 04:24:25 localhost python3.9[151674]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:26 localhost python3.9[151766]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:26 localhost python3.9[151814]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:24:27 localhost python3.9[151906]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:27 localhost python3.9[151998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:28 localhost python3.9[152046]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:28 localhost python3.9[152138]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:24:29 localhost python3.9[152186]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:24:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49276 DF PROTO=TCP SPT=32970 DPT=9101 SEQ=228231276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479043A40000000001030307)
Dec 2 04:24:29 localhost python3.9[152278]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:24:29 localhost systemd[1]: Reloading.
Dec 2 04:24:30 localhost systemd-rc-local-generator[152299]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:24:30 localhost systemd-sysv-generator[152302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:24:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:24:31 localhost python3.9[152407]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:24:32 localhost python3.9[152455]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:24:33 localhost python3.9[152547]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:24:33 localhost python3.9[152595]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:24:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=28061 DF PROTO=TCP SPT=42486 DPT=9102 SEQ=1993625355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790558D0000000001030307) Dec 2 04:24:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5165 DF PROTO=TCP SPT=45900 DPT=9105 SEQ=587892334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790560F0000000001030307) Dec 2 04:24:34 localhost python3.9[152687]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:24:34 localhost systemd[1]: Reloading. Dec 2 04:24:34 localhost systemd-sysv-generator[152713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:24:34 localhost systemd-rc-local-generator[152709]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:24:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:24:35 localhost systemd[1]: Starting Create netns directory... Dec 2 04:24:35 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 04:24:35 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:24:35 localhost systemd[1]: Finished Create netns directory. 
Dec 2 04:24:36 localhost python3.9[152820]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:24:36 localhost python3.9[152912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:24:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28063 DF PROTO=TCP SPT=42486 DPT=9102 SEQ=1993625355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479061A40000000001030307) Dec 2 04:24:37 localhost python3.9[152985]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667476.268329-1344-229882260860214/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:24:38 localhost python3.9[153077]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:24:38 localhost python3.9[153169]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:24:39 localhost python3.9[153244]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667478.5515296-1419-120214077552658/.source.json _original_basename=.9j20tbq7 follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:24:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47126 DF PROTO=TCP SPT=59134 DPT=9100 SEQ=540302236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47906DA40000000001030307) Dec 2 04:24:40 localhost python3.9[153336]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:24:42 localhost python3.9[153593]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Dec 2 
04:24:43 localhost python3.9[153685]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:24:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13043 DF PROTO=TCP SPT=60766 DPT=9882 SEQ=722403974 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479079A40000000001030307) Dec 2 04:24:44 localhost python3.9[153777]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 2 04:24:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47128 DF PROTO=TCP SPT=59134 DPT=9100 SEQ=540302236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479085640000000001030307) Dec 2 04:24:48 localhost python3[153896]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:24:48 localhost python3[153896]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",#012 "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:38:47.246477714Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 
"LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345722821,#012 "VirtualSize": 345722821,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",#012 "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 
"org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 
"empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:05.672474685Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l Dec 2 04:24:48 localhost podman[153947]: 2025-12-02 09:24:48.486539178 +0000 UTC m=+0.061695866 container remove e7eecdc150df71e60e827f323f5c39356b385a1381d6f79abb35c9f2026b626b (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 2 04:24:48 localhost python3[153896]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Dec 2 04:24:48 localhost podman[153961]: Dec 2 04:24:48 localhost podman[153961]: 2025-12-02 09:24:48.550597866 +0000 UTC m=+0.050724052 container create cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:24:48 localhost podman[153961]: 2025-12-02 09:24:48.526322435 +0000 UTC m=+0.026448651 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 2 04:24:48 localhost python3[153896]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 2 04:24:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5169 DF PROTO=TCP SPT=45900 DPT=9105 SEQ=587892334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479091E40000000001030307) Dec 2 04:24:49 localhost python3.9[154091]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:24:50 localhost python3.9[154185]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Dec 2 04:24:50 localhost python3.9[154231]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:24:51 localhost python3.9[154322]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667490.5944312-1683-175000602378591/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:24:51 localhost python3.9[154368]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:24:51 localhost systemd[1]: Reloading. Dec 2 04:24:51 localhost systemd-rc-local-generator[154390]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:24:51 localhost systemd-sysv-generator[154393]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:24:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:24:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54235 DF PROTO=TCP SPT=39826 DPT=9101 SEQ=595113565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47909D000000000001030307) Dec 2 04:24:52 localhost python3.9[154450]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:24:52 localhost systemd[1]: Reloading. Dec 2 04:24:52 localhost systemd-sysv-generator[154480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:24:52 localhost systemd-rc-local-generator[154475]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:24:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:24:53 localhost systemd[1]: Starting ovn_controller container... Dec 2 04:24:53 localhost systemd[1]: Started libcrun container. Dec 2 04:24:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34407dfb17e4d44a7094dfc01c3723ad0f2347db77e802073af38c7ff4fca0cd/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Dec 2 04:24:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:24:53 localhost podman[154491]: 2025-12-02 09:24:53.22714928 +0000 UTC m=+0.159252883 container init cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:24:53 localhost ovn_controller[154505]: + sudo -E kolla_set_configs Dec 2 04:24:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:24:53 localhost podman[154491]: 2025-12-02 09:24:53.271769757 +0000 UTC m=+0.203873310 container start cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:24:53 localhost edpm-start-podman-container[154491]: ovn_controller Dec 2 04:24:53 localhost systemd[1]: Created slice User Slice of UID 0. Dec 2 04:24:53 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 2 04:24:53 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 2 04:24:53 localhost systemd[1]: Starting User Manager for UID 0... 
Dec 2 04:24:53 localhost edpm-start-podman-container[154490]: Creating additional drop-in dependency for "ovn_controller" (cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782) Dec 2 04:24:53 localhost systemd[1]: Reloading. Dec 2 04:24:53 localhost podman[154512]: 2025-12-02 09:24:53.415149692 +0000 UTC m=+0.140036087 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:24:53 localhost podman[154512]: 2025-12-02 09:24:53.432546339 +0000 UTC m=+0.157432754 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:24:53 localhost podman[154512]: unhealthy Dec 2 04:24:53 localhost systemd[154535]: Queued start job for default target Main User Target. Dec 2 04:24:53 localhost systemd[154535]: Created slice User Application Slice. Dec 2 04:24:53 localhost systemd[154535]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 2 04:24:53 localhost systemd[154535]: Started Daily Cleanup of User's Temporary Directories. Dec 2 04:24:53 localhost systemd[154535]: Reached target Paths. Dec 2 04:24:53 localhost systemd[154535]: Reached target Timers. Dec 2 04:24:53 localhost systemd[154535]: Starting D-Bus User Message Bus Socket... 
Dec 2 04:24:53 localhost systemd[154535]: Starting Create User's Volatile Files and Directories... Dec 2 04:24:53 localhost systemd[154535]: Finished Create User's Volatile Files and Directories. Dec 2 04:24:53 localhost systemd[154535]: Listening on D-Bus User Message Bus Socket. Dec 2 04:24:53 localhost systemd[154535]: Reached target Sockets. Dec 2 04:24:53 localhost systemd-rc-local-generator[154594]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:24:53 localhost systemd[154535]: Reached target Basic System. Dec 2 04:24:53 localhost systemd[154535]: Reached target Main User Target. Dec 2 04:24:53 localhost systemd[154535]: Startup finished in 113ms. Dec 2 04:24:53 localhost systemd-sysv-generator[154598]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:24:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:24:53 localhost systemd[1]: tmp-crun.cddUDf.mount: Deactivated successfully. Dec 2 04:24:53 localhost systemd[1]: Started User Manager for UID 0. Dec 2 04:24:53 localhost systemd[1]: Started ovn_controller container. Dec 2 04:24:53 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:24:53 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Failed with result 'exit-code'. Dec 2 04:24:53 localhost systemd-journald[47611]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation. 
Dec 2 04:24:53 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:24:53 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:24:53 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:24:53 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:24:53 localhost systemd[1]: Started Session c12 of User root. Dec 2 04:24:53 localhost ovn_controller[154505]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:24:53 localhost ovn_controller[154505]: INFO:__main__:Validating config file Dec 2 04:24:53 localhost ovn_controller[154505]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:24:53 localhost ovn_controller[154505]: INFO:__main__:Writing out command to execute Dec 2 04:24:53 localhost systemd[1]: session-c12.scope: Deactivated successfully. Dec 2 04:24:53 localhost ovn_controller[154505]: ++ cat /run_command Dec 2 04:24:53 localhost ovn_controller[154505]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Dec 2 04:24:53 localhost ovn_controller[154505]: + ARGS= Dec 2 04:24:53 localhost ovn_controller[154505]: + sudo kolla_copy_cacerts Dec 2 04:24:53 localhost systemd[1]: Started Session c13 of User root. Dec 2 04:24:53 localhost ovn_controller[154505]: + [[ ! -n '' ]] Dec 2 04:24:53 localhost ovn_controller[154505]: + . 
kolla_extend_start Dec 2 04:24:53 localhost ovn_controller[154505]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Dec 2 04:24:53 localhost ovn_controller[154505]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Dec 2 04:24:53 localhost ovn_controller[154505]: + umask 0022 Dec 2 04:24:53 localhost ovn_controller[154505]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Dec 2 04:24:53 localhost systemd[1]: session-c13.scope: Deactivated successfully. Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8] Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00004|main|INFO|OVS IDL reconnected, force recompute. Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00013|main|INFO|OVS feature set changed, force recompute. Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute. Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00022|main|INFO|OVS feature set changed, force recompute. 
Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00023|ovn_bfd|INFO|Disabled BFD on interface ovn-be95dc-0 Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00024|ovn_bfd|INFO|Disabled BFD on interface ovn-2587fe-0 Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00025|ovn_bfd|INFO|Disabled BFD on interface ovn-4d166c-0 Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00026|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00027|binding|INFO|Claiming lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a for this chassis. Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00028|binding|INFO|4a318f6a-b3c1-4690-8246-f7d046ccd64a: Claiming fa:16:3e:26:b2:03 192.168.0.102 Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00029|binding|INFO|Removing lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a ovn-installed in OVS Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00030|ovn_bfd|INFO|Enabled BFD on interface ovn-be95dc-0 Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00031|ovn_bfd|INFO|Enabled BFD on interface ovn-2587fe-0 Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00032|ovn_bfd|INFO|Enabled BFD on interface ovn-4d166c-0 Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00033|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00034|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00035|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00036|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:53 localhost ovn_controller[154505]: 2025-12-02T09:24:53Z|00037|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:54 localhost ovn_controller[154505]: 2025-12-02T09:24:54Z|00038|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:54 localhost python3.9[154708]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . 
other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:24:54 localhost ovs-vsctl[154709]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Dec 2 04:24:54 localhost ovn_controller[154505]: 2025-12-02T09:24:54Z|00039|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:55 localhost ovn_controller[154505]: 2025-12-02T09:24:55Z|00040|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:55 localhost python3.9[154801]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:24:55 localhost ovs-vsctl[154803]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids Dec 2 04:24:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54237 DF PROTO=TCP SPT=39826 DPT=9101 SEQ=595113565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790A9250000000001030307) Dec 2 04:24:55 localhost ovn_controller[154505]: 2025-12-02T09:24:55Z|00041|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:24:56 localhost python3.9[154896]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . 
external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:24:57 localhost ovs-vsctl[154897]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Dec 2 04:24:57 localhost systemd[1]: session-50.scope: Deactivated successfully. Dec 2 04:24:57 localhost systemd[1]: session-50.scope: Consumed 40.687s CPU time. Dec 2 04:24:57 localhost systemd-logind[757]: Session 50 logged out. Waiting for processes to exit. Dec 2 04:24:57 localhost systemd-logind[757]: Removed session 50. Dec 2 04:24:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54238 DF PROTO=TCP SPT=39826 DPT=9101 SEQ=595113565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790B8E40000000001030307) Dec 2 04:25:01 localhost ovn_controller[154505]: 2025-12-02T09:25:01Z|00042|binding|INFO|Setting lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a ovn-installed in OVS Dec 2 04:25:01 localhost ovn_controller[154505]: 2025-12-02T09:25:01Z|00043|binding|INFO|Setting lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a up in Southbound Dec 2 04:25:03 localhost sshd[154991]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:25:03 localhost systemd-logind[757]: New session 52 of user zuul. Dec 2 04:25:03 localhost systemd[1]: Started Session 52 of User zuul. Dec 2 04:25:03 localhost systemd[1]: Stopping User Manager for UID 0... Dec 2 04:25:03 localhost systemd[154535]: Activating special unit Exit the Session... Dec 2 04:25:03 localhost systemd[154535]: Stopped target Main User Target. Dec 2 04:25:03 localhost systemd[154535]: Stopped target Basic System. Dec 2 04:25:03 localhost systemd[154535]: Stopped target Paths. Dec 2 04:25:03 localhost systemd[154535]: Stopped target Sockets. 
Dec 2 04:25:03 localhost systemd[154535]: Stopped target Timers. Dec 2 04:25:03 localhost systemd[154535]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 04:25:03 localhost systemd[154535]: Closed D-Bus User Message Bus Socket. Dec 2 04:25:03 localhost systemd[154535]: Stopped Create User's Volatile Files and Directories. Dec 2 04:25:03 localhost systemd[154535]: Removed slice User Application Slice. Dec 2 04:25:03 localhost systemd[154535]: Reached target Shutdown. Dec 2 04:25:03 localhost systemd[154535]: Finished Exit the Session. Dec 2 04:25:03 localhost systemd[154535]: Reached target Exit the Session. Dec 2 04:25:03 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 2 04:25:03 localhost systemd[1]: Stopped User Manager for UID 0. Dec 2 04:25:03 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 2 04:25:03 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 2 04:25:03 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 2 04:25:03 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 2 04:25:03 localhost systemd[1]: Removed slice User Slice of UID 0. 
Dec 2 04:25:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12676 DF PROTO=TCP SPT=48914 DPT=9102 SEQ=1810178562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790CABE0000000001030307) Dec 2 04:25:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38771 DF PROTO=TCP SPT=45276 DPT=9105 SEQ=683573260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790CB3E0000000001030307) Dec 2 04:25:04 localhost python3.9[155086]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:25:05 localhost python3.9[155182]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:06 localhost python3.9[155274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:06 localhost python3.9[155366]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12678 DF PROTO=TCP SPT=48914 DPT=9102 SEQ=1810178562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790D6E50000000001030307) Dec 2 04:25:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42918 DF PROTO=TCP SPT=34084 DPT=9100 SEQ=4267583192 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790E2A40000000001030307) Dec 2 04:25:10 localhost python3.9[155458]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:10 localhost python3.9[155550]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:11 localhost python3.9[155641]: ansible-ansible.builtin.setup Invoked with 
gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:25:12 localhost python3.9[155734]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Dec 2 04:25:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5607 DF PROTO=TCP SPT=33202 DPT=9882 SEQ=1109634363 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790EEE40000000001030307) Dec 2 04:25:13 localhost python3.9[155824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:13 localhost python3.9[155897]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667512.672773-219-149392302382990/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:14 localhost python3.9[155987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:15 localhost python3.9[156060]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667514.000656-264-193791946797672/.source follow=False 
_original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:16 localhost python3.9[156152]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:25:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42920 DF PROTO=TCP SPT=34084 DPT=9100 SEQ=4267583192 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4790FA650000000001030307) Dec 2 04:25:16 localhost python3.9[156206]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:25:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12680 DF PROTO=TCP SPT=48914 DPT=9102 SEQ=1810178562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479107E40000000001030307) Dec 2 04:25:21 localhost python3.9[156300]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 
2 04:25:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19090 DF PROTO=TCP SPT=42826 DPT=9101 SEQ=3732848012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479112300000000001030307) Dec 2 04:25:23 localhost python3.9[156393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:23 localhost python3.9[156464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667522.6247692-375-266210652669539/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:24 localhost python3.9[156554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:25:24 localhost podman[156623]: 2025-12-02 09:25:24.455173358 +0000 UTC m=+0.091788707 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:25:24 localhost ovn_controller[154505]: 2025-12-02T09:25:24Z|00044|memory|INFO|18748 kB peak resident set size after 30.7 seconds Dec 2 04:25:24 localhost ovn_controller[154505]: 2025-12-02T09:25:24Z|00045|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:1045 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:76 lflow-cache-entries-cache-matches:195 lflow-cache-size-KB:289 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:154 
ofctrl_installed_flow_usage-KB:111 ofctrl_sb_flow_ref_usage-KB:67 Dec 2 04:25:24 localhost podman[156623]: 2025-12-02 09:25:24.495115584 +0000 UTC m=+0.131730953 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:25:24 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:25:24 localhost python3.9[156626]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667523.6740825-375-101572207563676/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19092 DF PROTO=TCP SPT=42826 DPT=9101 SEQ=3732848012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47911E250000000001030307) Dec 2 04:25:26 localhost python3.9[156740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:26 localhost python3.9[156811]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667525.7406945-507-85302101226911/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:27 localhost python3.9[156901]: 
ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:28 localhost python3.9[156972]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667526.8203483-507-214270713255369/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:29 localhost python3.9[157062]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:25:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19093 DF PROTO=TCP SPT=42826 DPT=9101 SEQ=3732848012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47912DE50000000001030307) Dec 2 04:25:30 localhost python3.9[157156]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:30 localhost python3.9[157248]: 
ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:31 localhost python3.9[157296]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:31 localhost ovn_controller[154505]: 2025-12-02T09:25:31Z|00046|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory Dec 2 04:25:32 localhost python3.9[157388]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:32 localhost python3.9[157436]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:33 localhost python3.9[157528]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24345 DF PROTO=TCP SPT=46936 DPT=9102 SEQ=4063656171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47913FED0000000001030307) Dec 2 04:25:34 localhost python3.9[157620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54783 DF PROTO=TCP SPT=42942 DPT=9105 SEQ=1689485826 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791406E0000000001030307) Dec 2 04:25:34 localhost python3.9[157668]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:35 localhost python3.9[157760]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:35 localhost 
python3.9[157808]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:36 localhost python3.9[157900]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:25:36 localhost systemd[1]: Reloading. Dec 2 04:25:36 localhost systemd-rc-local-generator[157925]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:25:36 localhost systemd-sysv-generator[157930]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:25:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:25:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24347 DF PROTO=TCP SPT=46936 DPT=9102 SEQ=4063656171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47914BE40000000001030307) Dec 2 04:25:37 localhost python3.9[158030]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:38 localhost python3.9[158078]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:38 localhost python3.9[158170]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:39 localhost python3.9[158218]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:40 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61653 DF PROTO=TCP SPT=55884 DPT=9100 SEQ=1331993231 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479157E50000000001030307) Dec 2 04:25:40 localhost python3.9[158310]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:25:40 localhost systemd[1]: Reloading. Dec 2 04:25:40 localhost systemd-rc-local-generator[158337]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:25:40 localhost systemd-sysv-generator[158341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:25:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:25:40 localhost systemd[1]: Starting Create netns directory... Dec 2 04:25:40 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 04:25:40 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:25:40 localhost systemd[1]: Finished Create netns directory. 
Dec 2 04:25:42 localhost python3.9[158445]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28816 DF PROTO=TCP SPT=58718 DPT=9882 SEQ=1619514852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479163E40000000001030307) Dec 2 04:25:43 localhost python3.9[158537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:43 localhost python3.9[158610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667542.8706791-960-222331736408878/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:44 localhost python3.9[158702]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:25:45 localhost python3.9[158794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:25:45 localhost python3.9[158869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667544.956208-1035-274216101953328/.source.json _original_basename=.tbd4n10c follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61655 DF PROTO=TCP SPT=55884 DPT=9100 SEQ=1331993231 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47916FA40000000001030307) Dec 2 04:25:46 localhost python3.9[158961]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54787 DF 
PROTO=TCP SPT=42942 DPT=9105 SEQ=1689485826 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47917BE40000000001030307) Dec 2 04:25:49 localhost python3.9[159218]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Dec 2 04:25:50 localhost python3.9[159310]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:25:51 localhost python3.9[159402]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 2 04:25:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47284 DF PROTO=TCP SPT=59402 DPT=9101 SEQ=3481145413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791875F0000000001030307) Dec 2 04:25:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47286 DF PROTO=TCP SPT=59402 DPT=9101 SEQ=3481145413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479193650000000001030307) Dec 2 04:25:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:25:55 localhost podman[159479]: 2025-12-02 09:25:55.435349595 +0000 UTC m=+0.074772593 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller) Dec 2 04:25:55 localhost podman[159479]: 2025-12-02 09:25:55.559117674 +0000 UTC m=+0.198540652 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:25:55 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:25:55 localhost python3[159544]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:25:56 localhost python3[159544]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9",#012 "Digest": "sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:29:20.327314945Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784141054,#012 "VirtualSize": 784141054,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53/diff:/var/lib/containers/storage/overlay/2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:75abaaa40a93c0e2bba524b6f8d4eb5f1c4c9a33db70c892c7582ec5b0827e5e",#012 "sha256:01f43f620d1ea2a9e584abe0cc14c336bedcf55765127c000d743f536dd36f25",#012 "sha256:0bf5bd378602f28be423f5e84abddff3b103396fae3c167031b6e3fcfcf6f120"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) 
ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main 
clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf Dec 2 04:25:56 localhost podman[159597]: 2025-12-02 09:25:56.255154238 +0000 UTC m=+0.081471723 container remove 1843037d120f89137a6810a44de8833dcc9687cfe6e1d91311f20b712d090d85 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, 
url=https://www.redhat.com, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd1544001d5773d0045aaf61439ef5e02'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 2 04:25:56 localhost python3[159544]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Dec 2 04:25:56 localhost podman[159611]: Dec 2 04:25:56 localhost podman[159611]: 2025-12-02 09:25:56.367969044 +0000 UTC m=+0.092505836 container create 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_managed=true) Dec 2 04:25:56 localhost podman[159611]: 2025-12-02 09:25:56.321605189 +0000 UTC m=+0.046142001 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 04:25:56 localhost python3[159544]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env 
KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume 
/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 04:25:57 localhost python3.9[159741]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:25:57 localhost python3.9[159835]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:58 localhost python3.9[159881]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:25:59 localhost python3.9[159972]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667558.395957-1299-261375459293728/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:25:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47287 
DF PROTO=TCP SPT=59402 DPT=9101 SEQ=3481145413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791A3240000000001030307) Dec 2 04:25:59 localhost python3.9[160018]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:25:59 localhost systemd[1]: Reloading. Dec 2 04:26:00 localhost systemd-rc-local-generator[160040]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:00 localhost systemd-sysv-generator[160046]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:00 localhost python3.9[160130]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:00 localhost systemd[1]: Reloading. Dec 2 04:26:00 localhost systemd-rc-local-generator[160173]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:00 localhost systemd-sysv-generator[160178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:01 localhost systemd[1]: Starting ovn_metadata_agent container... 
Dec 2 04:26:01 localhost systemd[1]: tmp-crun.6cWxLT.mount: Deactivated successfully. Dec 2 04:26:01 localhost systemd[1]: Started libcrun container. Dec 2 04:26:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f360a981842424a8567fc7a0067d84cd0b544fe5f86f8a9d8455b05b782d3b1b/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 2 04:26:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f360a981842424a8567fc7a0067d84cd0b544fe5f86f8a9d8455b05b782d3b1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:26:01 localhost podman[160202]: 2025-12-02 09:26:01.27821411 +0000 UTC m=+0.159534024 container init 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + sudo -E kolla_set_configs Dec 2 04:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:26:01 localhost podman[160202]: 2025-12-02 09:26:01.317544298 +0000 UTC m=+0.198864222 container start 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:26:01 localhost edpm-start-podman-container[160202]: ovn_metadata_agent Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Validating config file Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Copying service configuration files Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Writing out command to execute Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 2 04:26:01 localhost 
ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/595e1c9b-709c-41d2-9212-0b18b13291a8.pid.haproxy Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/595e1c9b-709c-41d2-9212-0b18b13291a8.conf Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: ++ cat /run_command Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + CMD=neutron-ovn-metadata-agent Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + ARGS= Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + sudo kolla_copy_cacerts Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: Running command: 'neutron-ovn-metadata-agent' Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + [[ ! 
-n '' ]] Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + . kolla_extend_start Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + umask 0022 Dec 2 04:26:01 localhost ovn_metadata_agent[160216]: + exec neutron-ovn-metadata-agent Dec 2 04:26:01 localhost podman[160224]: 2025-12-02 09:26:01.405322298 +0000 UTC m=+0.081973616 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 04:26:01 localhost edpm-start-podman-container[160201]: Creating additional drop-in dependency for "ovn_metadata_agent" (34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb) Dec 2 04:26:01 localhost podman[160224]: 2025-12-02 09:26:01.487095057 +0000 UTC m=+0.163746405 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Dec 2 04:26:01 localhost systemd[1]: Reloading. Dec 2 04:26:01 localhost systemd-rc-local-generator[160286]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:01 localhost systemd-sysv-generator[160292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:01 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:26:01 localhost systemd[1]: Started ovn_metadata_agent container. 
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.973 160221 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.973 160221 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.973 160221 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] 
agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.974 160221 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 
2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.975 160221 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.976 160221 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost 
ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.977 160221 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval 
= 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.978 160221 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s 
%(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.979 160221 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.980 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.981 160221 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.982 160221 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.983 160221 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.984 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.985 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.986 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:02 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.987 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.988 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.989 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.990 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.991 160221 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.992 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.993 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.994 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02
09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.995 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.996 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost 
ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.997 
160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.998 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:02.999 160221 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.000 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] 
oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost 
ovn_metadata_agent[160216]: 2025-12-02 09:26:03.001 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.002 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.003 160221 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.011 160221 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.012 160221 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.012 160221 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.012 160221 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.012 160221 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Dec 2 04:26:03 localhost 
ovn_metadata_agent[160216]: 2025-12-02 09:26:03.034 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd (UUID: cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.049 160221 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.049 160221 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.049 160221 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.049 160221 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.051 160221 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.056 160221 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.063 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:b2:03 192.168.0.102'], 
port_security=['fa:16:3e:26:b2:03 192.168.0.102'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.102/24', 'neutron:device_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005541913.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1725f11b-f73c-4c4f-b3d3-772d68fcc09e 23293c48-39ca-43a0-a462-ebc8626a7f6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d69817-a35d-4528-880f-f329bfbd969c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4a318f6a-b3c1-4690-8246-f7d046ccd64a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.064 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '6e7f49c3-b0f2-5de8-9eab-f67d22eadf7d', 'neutron:ovn-metadata-sb-cfg': '1'}, name=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, nb_cfg_timestamp=1764667502569, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.065 160221 INFO 
neutron.agent.ovn.metadata.agent [-] Port 4a318f6a-b3c1-4690-8246-f7d046ccd64a in datapath 595e1c9b-709c-41d2-9212-0b18b13291a8 bound to our chassis on insert#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.066 160221 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.066 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.067 160221 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.067 160221 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.067 160221 INFO oslo_service.service [-] Starting 1 workers#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.069 160221 DEBUG oslo_service.service [-] Started child 160335 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.072 160221 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 595e1c9b-709c-41d2-9212-0b18b13291a8#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.073 160221 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', 
'--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpfc5bqu1f/privsep.sock']#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.074 160335 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1946781'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.111 160335 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.112 160335 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.112 160335 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.115 160335 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.116 160335 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.129 160335 INFO eventlet.wsgi.server [-] (160335) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Dec 2 04:26:03 localhost systemd[1]: session-52.scope: Deactivated successfully. Dec 2 04:26:03 localhost systemd[1]: session-52.scope: Consumed 32.054s CPU time. Dec 2 04:26:03 localhost systemd-logind[757]: Session 52 logged out. Waiting for processes to exit. 
Dec 2 04:26:03 localhost systemd-logind[757]: Removed session 52.
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.702 160221 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.703 160221 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpfc5bqu1f/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.600 160340 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.606 160340 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.609 160340 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.610 160340 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160340
Dec 2 04:26:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:03.706 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[7f51690a-465a-4109-b7a5-6a9dd0c10206]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 04:26:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4895 DF PROTO=TCP SPT=38356 DPT=9102 SEQ=2803013807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791B51D0000000001030307)
Dec 2 04:26:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61541 DF PROTO=TCP SPT=48654 DPT=9105 SEQ=1276867292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791B59E0000000001030307)
Dec 2 04:26:04 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:04.140 160340 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:26:04 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:04.140 160340 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:26:04 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:04.140 160340 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:26:04 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:04.668 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[37691d4f-3e6e-466e-92ff-faef31c45e9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 04:26:04 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:04.670 160221 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6quvgmqh/privsep.sock']
Dec 2 04:26:05 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:05.586 160221 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 2 04:26:05 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:05.587 160221 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6quvgmqh/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 2 04:26:05 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:05.364 160351 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 2 04:26:05 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:05.385 160351 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 2 04:26:05 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:05.400 160351 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 2 04:26:05 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:05.401 160351 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160351
Dec 2 04:26:05 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:05.590 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[82d7f118-e244-4378-92e2-39e53a3bcc00]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.072 160351 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.072 160351 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.072 160351 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.704 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[40b6147b-e62d-450f-9233-492ec02f4932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.708 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[192a42ee-982d-415b-b3f6-1623bbffd307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.746 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[a7107976-20c4-48a3-be58-49ada6436a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.765 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[949eaf20-7b71-4041-9804-2aa6d1e747a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap595e1c9b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e8:5a:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 104, 'tx_packets': 68, 'rx_bytes': 8926, 'tx_bytes': 7143, 'rx_errors': 0, 'tx_errors': 0, 
'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 104, 'tx_packets': 68, 'rx_bytes': 8926, 'tx_bytes': 7143, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647718, 'reachable_time': 39342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 
'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 17, 'outoctets': 1164, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 17, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 1164, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 17, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 160361, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.783 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[3d892970-efdf-442b-aab3-fff7c20bd661]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], 
['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap595e1c9b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647724, 'tstamp': 647724}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 160362, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap595e1c9b-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647728, 'tstamp': 647728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 160362, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::a9fe:a9fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647727, 'tstamp': 647727}], ['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 160362, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:5a19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647718, 'tstamp': 647718}], ['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 160362, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:26:06 localhost 
ovn_metadata_agent[160216]: 2025-12-02 09:26:06.844 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[01c01976-ce1e-4337-9de6-aa3c438883a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.846 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap595e1c9b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.851 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap595e1c9b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.852 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.852 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap595e1c9b-70, col_values=(('external_ids', {'iface-id': 'd6e7da3f-8574-49e0-8ba1-2f642b3cec92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.853 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 2 04:26:06 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:06.857 160221 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp1mspp3cm/privsep.sock']
Dec 2 04:26:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4897 DF PROTO=TCP SPT=38356 DPT=9102 SEQ=2803013807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791C1250000000001030307)
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:08.339 160221 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:08.340 160221 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1mspp3cm/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:07.459 160371 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:07.487 160371 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:07.489 160371 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:07.489 160371 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160371
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:08.344 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6082e2-2eee-40b3-83fa-3840a6de9e75]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:08.803 160371 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:08.803 160371 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:26:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:08.803 160371 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.315 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[d731f87c-b725-4236-a0a2-3ae00f1dc5ea]: (4, ['ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8']) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.318 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, column=external_ids, values=({'neutron:ovn-metadata-id': '6e7f49c3-b0f2-5de8-9eab-f67d22eadf7d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.319 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.320 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.472 160221 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.473 160221 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.474 160221 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.474 160221 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.474 160221 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.475 160221 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.475 160221 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.476 160221 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.476 160221 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.477 160221 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.477 160221 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.477 160221 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.478 160221 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.478 160221 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.479 160221 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.479 160221 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.480 160221 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.480 160221 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.480 160221 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.481 160221 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.481 160221 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.482 160221 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.482 160221 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.483 160221 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.483 160221 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.484 160221 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.484 160221 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.485 160221 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.485 160221 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.486 160221 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.486 160221 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.486 160221 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.487 160221 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.487 160221 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.487 160221 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.488 160221 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.488 160221 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.489 160221 DEBUG oslo_service.service [-] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.490 160221 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.490 160221 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.490 160221 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.490 160221 DEBUG oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.491 160221 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.491 160221 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.491 160221 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.492 160221 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.492 160221 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.492 160221 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.492 160221 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.493 160221 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.493 160221 DEBUG oslo_service.service [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.493 160221 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.494 160221 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.494 160221 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.494 160221 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.495 160221 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.495 160221 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.495 160221 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.495 160221 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.496 160221 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.496 160221 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.496 160221 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.497 160221 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.497 160221 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.497 160221 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.498 160221 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.498 160221 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.498 160221 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.498 160221 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.499 160221 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.499 160221 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.499 160221 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.500 160221 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.500 160221 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.500 160221 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.501 160221 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.501 160221 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.501 160221 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.501 160221 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.502 160221 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.502 160221 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.502 160221 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.503 160221 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:26:09 localhost
ovn_metadata_agent[160216]: 2025-12-02 09:26:09.503 160221 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.503 160221 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.504 160221 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.504 160221 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.504 160221 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.504 160221 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.505 160221 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.505 160221 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.505 160221 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost 
ovn_metadata_agent[160216]: 2025-12-02 09:26:09.506 160221 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.506 160221 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.506 160221 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.506 160221 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.507 160221 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.507 160221 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.507 160221 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.508 160221 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.508 160221 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.508 
160221 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.509 160221 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.509 160221 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.509 160221 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.509 160221 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.510 160221 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.510 160221 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.510 160221 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.511 160221 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.511 160221 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.511 160221 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.512 160221 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.512 160221 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.512 160221 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.513 160221 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.513 160221 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.513 160221 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.513 160221 DEBUG 
oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.514 160221 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.514 160221 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.514 160221 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.515 160221 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.515 160221 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.515 160221 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.516 160221 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.516 160221 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.516 160221 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.517 160221 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.517 160221 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.517 160221 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.517 160221 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.518 160221 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.518 160221 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.518 160221 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.519 160221 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.519 160221 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.519 160221 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.520 160221 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.520 160221 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.520 160221 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.520 160221 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.521 160221 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.521 160221 DEBUG 
oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.521 160221 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.522 160221 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.522 160221 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.522 160221 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.522 160221 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.523 160221 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.523 160221 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.523 160221 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.524 160221 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.524 160221 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.524 160221 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.524 160221 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.525 160221 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.525 160221 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.525 160221 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.525 160221 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 
2025-12-02 09:26:09.526 160221 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.526 160221 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.526 160221 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.527 160221 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.527 160221 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.527 160221 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.528 160221 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.528 160221 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.528 160221 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.528 160221 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.529 160221 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.529 160221 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.529 160221 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.529 160221 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.530 160221 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.530 160221 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.530 160221 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 
09:26:09.531 160221 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.531 160221 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.531 160221 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.531 160221 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.532 160221 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.532 160221 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.532 160221 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.533 160221 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.533 160221 DEBUG oslo_service.service [-] 
QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.533 160221 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.534 160221 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.534 160221 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.534 160221 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.534 160221 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.535 160221 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.535 160221 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.535 160221 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.536 160221 DEBUG oslo_service.service 
[-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.536 160221 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.536 160221 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.537 160221 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.537 160221 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.537 160221 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.537 160221 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.538 160221 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.538 160221 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.538 160221 DEBUG oslo_service.service [-] placement.auth_type 
= None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.539 160221 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.539 160221 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.539 160221 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.539 160221 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.540 160221 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.540 160221 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.540 160221 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.540 160221 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.540 160221 DEBUG oslo_service.service [-] 
placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.541 160221 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.541 160221 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.541 160221 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.541 160221 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.541 160221 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.541 160221 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.542 160221 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.542 160221 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.542 160221 DEBUG 
oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.542 160221 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.542 160221 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.543 160221 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.543 160221 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.543 160221 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.543 160221 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.543 160221 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.543 160221 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.544 160221 DEBUG 
oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.544 160221 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.544 160221 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.544 160221 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.544 160221 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.545 160221 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.545 160221 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.545 160221 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.545 160221 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost 
ovn_metadata_agent[160216]: 2025-12-02 09:26:09.545 160221 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.546 160221 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.546 160221 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.546 160221 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.546 160221 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.546 160221 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.546 160221 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.547 160221 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.547 160221 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.547 160221 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.547 160221 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.547 160221 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.548 160221 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.548 160221 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.548 160221 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.548 160221 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.548 160221 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.549 160221 DEBUG oslo_service.service [-] 
ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.549 160221 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.549 160221 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.549 160221 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.549 160221 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.550 160221 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.550 160221 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.550 160221 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.550 160221 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.550 
160221 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.550 160221 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.551 160221 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.551 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.551 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.551 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.551 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.552 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.552 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.552 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.552 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.552 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.553 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.553 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.553 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.553 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.553 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.554 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.554 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.554 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.554 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.554 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.555 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.555 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.555 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.555 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.555 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.555 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.556 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.556 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.556 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.556 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.556 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost 
ovn_metadata_agent[160216]: 2025-12-02 09:26:09.557 160221 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.557 160221 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.557 160221 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.557 160221 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.557 160221 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:26:09 localhost ovn_metadata_agent[160216]: 2025-12-02 09:26:09.558 160221 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 2 04:26:09 localhost sshd[160376]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:26:09 localhost systemd-logind[757]: New session 53 of user zuul. Dec 2 04:26:09 localhost systemd[1]: Started Session 53 of User zuul. 
Dec 2 04:26:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44628 DF PROTO=TCP SPT=45144 DPT=9100 SEQ=2318657959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791CD240000000001030307) Dec 2 04:26:10 localhost python3.9[160469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:26:12 localhost python3.9[160565]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57191 DF PROTO=TCP SPT=50190 DPT=9882 SEQ=2419112481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791D9240000000001030307) Dec 2 04:26:13 localhost python3.9[160669]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:26:13 localhost systemd[1]: libpod-9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1.scope: Deactivated successfully. 
Dec 2 04:26:13 localhost podman[160670]: 2025-12-02 09:26:13.277411132 +0000 UTC m=+0.058042261 container died 9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 2 04:26:13 localhost podman[160670]: 2025-12-02 09:26:13.32432964 +0000 UTC m=+0.104960759 container cleanup 9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, release=1761123044) Dec 2 04:26:13 localhost podman[160684]: 2025-12-02 09:26:13.420729852 +0000 UTC m=+0.099679139 container remove 9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, distribution-scope=public, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, 
summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Dec 2 04:26:13 localhost systemd[1]: libpod-conmon-9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1.scope: Deactivated successfully. Dec 2 04:26:14 localhost systemd[1]: var-lib-containers-storage-overlay-93ad9083e7cc3e7616303b5d13e7a101d6cbdaa325d96e32c757f24ef781f581-merged.mount: Deactivated successfully. Dec 2 04:26:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9eb9f01827a63f0bcbf5f5e5d764a5c07546957865a50dee3f13116030c748e1-userdata-shm.mount: Deactivated successfully. Dec 2 04:26:14 localhost python3.9[160791]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:26:14 localhost systemd[1]: Reloading. Dec 2 04:26:14 localhost systemd-rc-local-generator[160817]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:14 localhost systemd-sysv-generator[160823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:15 localhost python3.9[160918]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:26:15 localhost network[160935]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Dec 2 04:26:15 localhost network[160936]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:26:15 localhost network[160937]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:26:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44630 DF PROTO=TCP SPT=45144 DPT=9100 SEQ=2318657959 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791E4E50000000001030307) Dec 2 04:26:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61545 DF PROTO=TCP SPT=48654 DPT=9105 SEQ=1276867292 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791F1E40000000001030307) Dec 2 04:26:21 localhost python3.9[161138]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:22 localhost systemd[1]: Reloading. Dec 2 04:26:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19699 DF PROTO=TCP SPT=39292 DPT=9101 SEQ=1521133884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4791FC900000000001030307) Dec 2 04:26:22 localhost systemd-sysv-generator[161170]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:26:22 localhost systemd-rc-local-generator[161165]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:26:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:26:22 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. Dec 2 04:26:23 localhost python3.9[161269]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:23 localhost python3.9[161362]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:24 localhost python3.9[161455]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19701 DF PROTO=TCP SPT=39292 DPT=9101 SEQ=1521133884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479208A40000000001030307) Dec 2 04:26:25 localhost python3.9[161548]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:26:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:26:26 localhost podman[161642]: 2025-12-02 09:26:26.277294158 +0000 UTC m=+0.086102123 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 2 04:26:26 localhost podman[161642]: 2025-12-02 09:26:26.316930823 +0000 UTC m=+0.125738818 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 04:26:26 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:26:26 localhost python3.9[161641]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:26:27 localhost python3.9[161760]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:26:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19702 DF PROTO=TCP SPT=39292 DPT=9101 SEQ=1521133884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479218640000000001030307)
Dec 2 04:26:29 localhost python3.9[161853]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:30 localhost python3.9[161945]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:31 localhost python3.9[162037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:26:32 localhost systemd[1]: tmp-crun.AHl9rr.mount: Deactivated successfully.
Dec 2 04:26:32 localhost podman[162129]: 2025-12-02 09:26:32.297012653 +0000 UTC m=+0.095636124 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 04:26:32 localhost podman[162129]: 2025-12-02 09:26:32.304043058 +0000 UTC m=+0.102666569 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 2 04:26:32 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:26:32 localhost python3.9[162130]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:32 localhost python3.9[162239]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:33 localhost python3.9[162331]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25749 DF PROTO=TCP SPT=52052 DPT=9102 SEQ=3936899043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47922A4E0000000001030307)
Dec 2 04:26:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26660 DF PROTO=TCP SPT=42600 DPT=9105 SEQ=803090811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47922ACF0000000001030307)
Dec 2 04:26:34 localhost python3.9[162423]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:34 localhost python3.9[162515]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:35 localhost python3.9[162607]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:36 localhost python3.9[162699]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:36 localhost python3.9[162791]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25751 DF PROTO=TCP SPT=52052 DPT=9102 SEQ=3936899043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479236640000000001030307)
Dec 2 04:26:37 localhost python3.9[162883]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:37 localhost python3.9[162975]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:38 localhost python3.9[163067]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:26:39 localhost python3.9[163159]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:26:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28819 DF PROTO=TCP SPT=58718 DPT=9882 SEQ=1619514852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479241E40000000001030307)
Dec 2 04:26:40 localhost python3.9[163251]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 2 04:26:41 localhost python3.9[163343]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:26:41 localhost systemd[1]: Reloading.
Dec 2 04:26:41 localhost systemd-rc-local-generator[163366]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:26:41 localhost systemd-sysv-generator[163371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:26:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:26:42 localhost python3.9[163470]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:26:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61658 DF PROTO=TCP SPT=55884 DPT=9100 SEQ=1331993231 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47924DE40000000001030307)
Dec 2 04:26:43 localhost python3.9[163563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:26:44 localhost python3.9[163656]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:26:44 localhost python3.9[163749]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:26:45 localhost python3.9[163842]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:26:46 localhost python3.9[163935]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:26:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23378 DF PROTO=TCP SPT=35140 DPT=9100 SEQ=816941798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47925A240000000001030307)
Dec 2 04:26:46 localhost python3.9[164028]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:26:48 localhost python3.9[164121]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 2 04:26:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25753 DF PROTO=TCP SPT=52052 DPT=9102 SEQ=3936899043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479265E40000000001030307)
Dec 2 04:26:49 localhost python3.9[164214]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 2 04:26:50 localhost python3.9[164313]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541913.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 2 04:26:51 localhost python3.9[164413]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 2 04:26:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13365 DF PROTO=TCP SPT=49588 DPT=9101 SEQ=2064776806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479271C00000000001030307)
Dec 2 04:26:53 localhost python3.9[164467]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 2 04:26:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13367 DF PROTO=TCP SPT=49588 DPT=9101 SEQ=2064776806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47927DE40000000001030307)
Dec 2 04:26:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:26:56 localhost podman[164470]: 2025-12-02 09:26:56.538010526 +0000 UTC m=+0.160081053 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec 2 04:26:56 localhost podman[164470]: 2025-12-02 09:26:56.584527222 +0000 UTC m=+0.206597739 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:26:56 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:26:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13368 DF PROTO=TCP SPT=49588 DPT=9101 SEQ=2064776806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47928DA50000000001030307)
Dec 2 04:27:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:27:02 localhost podman[164564]: 2025-12-02 09:27:02.460069856 +0000 UTC m=+0.096580118 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 04:27:02 localhost podman[164564]: 2025-12-02 09:27:02.46549582 +0000 UTC m=+0.102006052 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 04:27:02 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:27:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:27:03.005 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:27:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:27:03.006 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:27:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:27:03.008 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:27:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45502 DF PROTO=TCP SPT=46298 DPT=9102 SEQ=3016421781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47929F7D0000000001030307)
Dec 2 04:27:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44373 DF PROTO=TCP SPT=49598 DPT=9105 SEQ=1362833837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47929FFF0000000001030307)
Dec 2 04:27:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45504 DF PROTO=TCP SPT=46298 DPT=9102 SEQ=3016421781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4792ABA40000000001030307)
Dec 2 04:27:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63715 DF PROTO=TCP SPT=33978 DPT=9100 SEQ=1861443308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4792B7650000000001030307)
Dec 2 04:27:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:27:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 4776 writes, 21K keys, 4776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4776 writes, 569 syncs, 8.39 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581cab122d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB)
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5581cab122d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) 
Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Dec 2 04:27:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36472 DF PROTO=TCP SPT=34670 DPT=9882 SEQ=1097936100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4792C3A50000000001030307) Dec 2 04:27:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:27:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.2 total, 600.0 interval#012Cumulative writes: 5722 writes, 25K keys, 5722 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5722 writes, 780 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.021 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x565243dd22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) 
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Dec 2 04:27:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63717 DF PROTO=TCP SPT=33978 DPT=9100 SEQ=1861443308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A4792CF240000000001030307) Dec 2 04:27:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44377 DF PROTO=TCP SPT=49598 DPT=9105 SEQ=1362833837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4792DBE40000000001030307) Dec 2 04:27:21 localhost kernel: SELinux: Converting 2759 SID table entries... Dec 2 04:27:21 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped). Dec 2 04:27:21 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:27:21 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:27:21 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:27:21 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:27:21 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:27:21 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:27:21 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:27:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61846 DF PROTO=TCP SPT=38060 DPT=9101 SEQ=933683415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4792E6F00000000001030307) Dec 2 04:27:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61848 DF PROTO=TCP SPT=38060 DPT=9101 SEQ=933683415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4792F2E40000000001030307) Dec 2 04:27:27 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=19 res=1 Dec 2 04:27:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:27:27 localhost podman[165712]: 2025-12-02 09:27:27.500626245 +0000 UTC m=+0.114048534 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:27:27 localhost podman[165712]: 2025-12-02 09:27:27.570042341 +0000 UTC m=+0.183464650 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:27:27 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:27:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61849 DF PROTO=TCP SPT=38060 DPT=9101 SEQ=933683415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479302A40000000001030307) Dec 2 04:27:32 localhost kernel: SELinux: Converting 2762 SID table entries... 
Dec 2 04:27:32 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:27:32 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:27:32 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:27:32 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:27:32 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:27:32 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:27:32 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:27:33 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=20 res=1 Dec 2 04:27:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:27:33 localhost systemd[1]: tmp-crun.FClIUm.mount: Deactivated successfully. Dec 2 04:27:33 localhost podman[165746]: 2025-12-02 09:27:33.515221152 +0000 UTC m=+0.129346106 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:27:33 localhost podman[165746]: 2025-12-02 09:27:33.550192002 +0000 UTC m=+0.164316976 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:27:33 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:27:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58257 DF PROTO=TCP SPT=34262 DPT=9102 SEQ=3079829525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479314AD0000000001030307) Dec 2 04:27:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12312 DF PROTO=TCP SPT=36842 DPT=9105 SEQ=1737036643 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793152E0000000001030307) Dec 2 04:27:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58259 DF PROTO=TCP SPT=34262 DPT=9102 SEQ=3079829525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479320A50000000001030307) Dec 2 04:27:40 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25145 DF PROTO=TCP SPT=34572 DPT=9100 SEQ=3650209280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47932CA40000000001030307) Dec 2 04:27:42 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 2 04:27:42 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:27:42 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:27:42 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:27:42 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:27:42 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:27:42 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:27:42 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:27:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23381 DF PROTO=TCP SPT=35140 DPT=9100 SEQ=816941798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479337E40000000001030307) Dec 2 04:27:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25147 DF PROTO=TCP SPT=34572 DPT=9100 SEQ=3650209280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479344640000000001030307) Dec 2 04:27:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58261 DF PROTO=TCP SPT=34262 DPT=9102 SEQ=3079829525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47934FE50000000001030307) Dec 2 04:27:51 localhost kernel: SELinux: Converting 2762 SID 
table entries... Dec 2 04:27:51 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:27:51 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:27:51 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:27:51 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:27:51 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:27:51 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:27:51 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:27:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3406 DF PROTO=TCP SPT=37430 DPT=9101 SEQ=2658431250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47935C200000000001030307) Dec 2 04:27:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3408 DF PROTO=TCP SPT=37430 DPT=9101 SEQ=2658431250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479368240000000001030307) Dec 2 04:27:58 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=22 res=1 Dec 2 04:27:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:27:58 localhost podman[165785]: 2025-12-02 09:27:58.490597933 +0000 UTC m=+0.108261180 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Dec 2 04:27:58 localhost podman[165785]: 2025-12-02 09:27:58.576554082 +0000 UTC m=+0.194217329 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:27:58 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:27:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3409 DF PROTO=TCP SPT=37430 DPT=9101 SEQ=2658431250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479377E40000000001030307) Dec 2 04:28:02 localhost kernel: SELinux: Converting 2762 SID table entries... 
Dec 2 04:28:02 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:28:02 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:28:02 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:28:02 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:28:02 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:28:02 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:28:02 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:28:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:28:03.006 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:28:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:28:03.008 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:28:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:28:03.011 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:28:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53984 DF PROTO=TCP SPT=47918 DPT=9102 SEQ=3456281276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479389DE0000000001030307) Dec 2 04:28:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65302 DF PROTO=TCP SPT=43668 DPT=9105 SEQ=1511092591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47938A5E0000000001030307) Dec 2 04:28:04 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=23 res=1 Dec 2 04:28:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:28:04 localhost podman[165819]: 2025-12-02 09:28:04.462529264 +0000 UTC m=+0.089171657 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:28:04 localhost podman[165819]: 2025-12-02 09:28:04.472024818 +0000 UTC m=+0.098667241 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:28:04 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:28:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53986 DF PROTO=TCP SPT=47918 DPT=9102 SEQ=3456281276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479395E50000000001030307) Dec 2 04:28:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2404 DF PROTO=TCP SPT=51408 DPT=9100 SEQ=862885942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793A1E40000000001030307) Dec 2 04:28:11 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 2 04:28:11 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:28:11 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:28:11 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:28:11 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:28:11 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:28:11 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:28:11 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:28:12 localhost systemd[1]: Reloading. 
Dec 2 04:28:12 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=24 res=1 Dec 2 04:28:12 localhost systemd-rc-local-generator[165952]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:28:12 localhost systemd-sysv-generator[165957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:28:12 localhost systemd[1]: Reloading. Dec 2 04:28:12 localhost systemd-rc-local-generator[165991]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:28:12 localhost systemd-sysv-generator[165996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:28:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:28:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2755 DF PROTO=TCP SPT=52136 DPT=9882 SEQ=940349716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793ADE40000000001030307) Dec 2 04:28:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2406 DF PROTO=TCP SPT=51408 DPT=9100 SEQ=862885942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793B9A50000000001030307) Dec 2 04:28:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65306 DF PROTO=TCP SPT=43668 DPT=9105 SEQ=1511092591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793C5E40000000001030307) Dec 2 04:28:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63876 DF PROTO=TCP SPT=51204 DPT=9101 SEQ=3510173948 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793D1500000000001030307) Dec 2 04:28:22 localhost kernel: SELinux: Converting 2763 SID table entries... 
Dec 2 04:28:22 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 2 04:28:22 localhost kernel: SELinux: policy capability open_perms=1 Dec 2 04:28:22 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 2 04:28:22 localhost kernel: SELinux: policy capability always_check_network=0 Dec 2 04:28:22 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 2 04:28:22 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 2 04:28:22 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 2 04:28:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63878 DF PROTO=TCP SPT=51204 DPT=9101 SEQ=3510173948 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793DD640000000001030307) Dec 2 04:28:27 localhost dbus-broker-launch[742]: Noticed file-system modification, trigger reload. Dec 2 04:28:27 localhost dbus-broker-launch[748]: avc: op=load_policy lsm=selinux seqno=25 res=1 Dec 2 04:28:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63879 DF PROTO=TCP SPT=51204 DPT=9101 SEQ=3510173948 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793ED250000000001030307) Dec 2 04:28:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:28:29 localhost podman[166203]: 2025-12-02 09:28:29.543335052 +0000 UTC m=+0.128965689 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:28:29 localhost podman[166203]: 2025-12-02 09:28:29.619574394 +0000 UTC m=+0.205204961 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 04:28:29 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:28:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4721 DF PROTO=TCP SPT=56644 DPT=9102 SEQ=1960687409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793FF0D0000000001030307) Dec 2 04:28:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47866 DF PROTO=TCP SPT=34876 DPT=9105 SEQ=2819316595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4793FF8F0000000001030307) Dec 2 04:28:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:28:35 localhost systemd[1]: tmp-crun.1MDhJ6.mount: Deactivated successfully. 
Dec 2 04:28:35 localhost podman[166273]: 2025-12-02 09:28:35.549470795 +0000 UTC m=+0.152978398 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS) Dec 2 04:28:35 localhost podman[166273]: 2025-12-02 09:28:35.578540951 +0000 UTC 
m=+0.182048474 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 04:28:35 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:28:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4723 DF PROTO=TCP SPT=56644 DPT=9102 SEQ=1960687409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47940B240000000001030307) Dec 2 04:28:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63458 DF PROTO=TCP SPT=52066 DPT=9100 SEQ=2095280239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479416E40000000001030307) Dec 2 04:28:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40791 DF PROTO=TCP SPT=60150 DPT=9882 SEQ=887621075 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479423240000000001030307) Dec 2 04:28:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63460 DF PROTO=TCP SPT=52066 DPT=9100 SEQ=2095280239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47942EA40000000001030307) Dec 2 04:28:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4725 DF PROTO=TCP SPT=56644 DPT=9102 SEQ=1960687409 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47943BE50000000001030307) Dec 2 04:28:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48416 DF PROTO=TCP SPT=38684 DPT=9101 SEQ=3557542384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A479446800000000001030307) Dec 2 04:28:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48418 DF PROTO=TCP SPT=38684 DPT=9101 SEQ=3557542384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479452A40000000001030307) Dec 2 04:28:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48419 DF PROTO=TCP SPT=38684 DPT=9101 SEQ=3557542384 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479462640000000001030307) Dec 2 04:29:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:29:01 localhost podman[179628]: 2025-12-02 09:29:01.065485415 +0000 UTC m=+0.702095309 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:29:01 localhost podman[179628]: 2025-12-02 09:29:01.100950325 +0000 UTC m=+0.737560209 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 2 04:29:01 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:29:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:29:03.008 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:29:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:29:03.008 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:29:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:29:03.010 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:29:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23394 DF PROTO=TCP SPT=40874 DPT=9102 SEQ=1648017291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794743D0000000001030307) Dec 2 04:29:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30404 DF PROTO=TCP SPT=47810 DPT=9105 SEQ=2704965570 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479474BF0000000001030307) Dec 2 04:29:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:29:06 localhost podman[183128]: 2025-12-02 09:29:06.341828571 +0000 UTC m=+0.071537886 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:29:06 localhost podman[183128]: 2025-12-02 09:29:06.376099588 +0000 UTC 
m=+0.105808863 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent) Dec 2 04:29:06 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:29:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23396 DF PROTO=TCP SPT=40874 DPT=9102 SEQ=1648017291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479480640000000001030307) Dec 2 04:29:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2758 DF PROTO=TCP SPT=52136 DPT=9882 SEQ=940349716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47948BE40000000001030307) Dec 2 04:29:12 localhost systemd[1]: Stopping OpenSSH server daemon... Dec 2 04:29:12 localhost systemd[1]: sshd.service: Deactivated successfully. Dec 2 04:29:12 localhost systemd[1]: Stopped OpenSSH server daemon. Dec 2 04:29:12 localhost systemd[1]: sshd.service: Consumed 1.080s CPU time, read 32.0K from disk, written 0B to disk. Dec 2 04:29:12 localhost systemd[1]: Stopped target sshd-keygen.target. Dec 2 04:29:12 localhost systemd[1]: Stopping sshd-keygen.target... Dec 2 04:29:12 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 2 04:29:12 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 2 04:29:12 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 2 04:29:12 localhost systemd[1]: Reached target sshd-keygen.target. Dec 2 04:29:12 localhost systemd[1]: Starting OpenSSH server daemon... 
Dec 2 04:29:12 localhost sshd[184098]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:29:12 localhost systemd[1]: Started OpenSSH server daemon.
Dec 2 04:29:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:12 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2409 DF PROTO=TCP SPT=51408 DPT=9100 SEQ=862885942 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479497E50000000001030307)
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:13 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:14 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:29:14 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 04:29:14 localhost systemd[1]: Reloading.
Dec 2 04:29:15 localhost systemd-rc-local-generator[184323]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:15 localhost systemd-sysv-generator[184328]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:15 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:15 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 2 04:29:15 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
Dec 2 04:29:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60019 DF PROTO=TCP SPT=35648 DPT=9100 SEQ=2819697767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794A3E40000000001030307) Dec 2 04:29:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23398 DF PROTO=TCP SPT=40874 DPT=9102 SEQ=1648017291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794AFE50000000001030307) Dec 2 04:29:19 localhost python3.9[189413]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 2 04:29:19 localhost systemd[1]: Reloading. Dec 2 04:29:19 localhost systemd-sysv-generator[189772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:29:19 localhost systemd-rc-local-generator[189765]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:19 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:20 localhost python3.9[190195]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 2 04:29:20 localhost systemd[1]: Reloading. Dec 2 04:29:20 localhost systemd-rc-local-generator[190308]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:29:20 localhost systemd-sysv-generator[190312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:21 localhost python3.9[190670]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 2 04:29:21 localhost systemd[1]: Reloading. Dec 2 04:29:22 localhost systemd-rc-local-generator[190870]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:29:22 localhost systemd-sysv-generator[190874]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13286 DF PROTO=TCP SPT=33814 DPT=9101 SEQ=674863162 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794BBB00000000001030307) Dec 2 04:29:24 localhost python3.9[191590]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 2 04:29:24 localhost systemd[1]: Reloading. Dec 2 04:29:24 localhost systemd-sysv-generator[191765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:29:24 localhost systemd-rc-local-generator[191757]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13288 DF PROTO=TCP SPT=33814 DPT=9101 SEQ=674863162 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794C7A40000000001030307)
Dec 2 04:29:25 localhost python3.9[192158]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:25 localhost systemd[1]: Reloading.
Dec 2 04:29:25 localhost systemd-rc-local-generator[192356]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:25 localhost systemd-sysv-generator[192362]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost python3.9[192776]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:26 localhost systemd[1]: Reloading.
Dec 2 04:29:26 localhost systemd-rc-local-generator[193004]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:26 localhost systemd-sysv-generator[193007]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:26 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost python3.9[193379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:27 localhost systemd[1]: Reloading.
Dec 2 04:29:27 localhost systemd-rc-local-generator[193590]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:27 localhost systemd-sysv-generator[193597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:28 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 04:29:28 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 04:29:28 localhost systemd[1]: man-db-cache-update.service: Consumed 16.483s CPU time. Dec 2 04:29:28 localhost systemd[1]: run-r6512aaa9a49947a7bce575053b2d2eb3.service: Deactivated successfully. Dec 2 04:29:28 localhost systemd[1]: run-r1080bb83e45e428ba54a0498c9e579da.service: Deactivated successfully. Dec 2 04:29:28 localhost python3.9[193916]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13289 DF PROTO=TCP SPT=33814 DPT=9101 SEQ=674863162 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794D7640000000001030307) Dec 2 04:29:29 localhost python3.9[194062]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:30 localhost systemd[1]: Reloading. Dec 2 04:29:30 localhost systemd-rc-local-generator[194094]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:29:30 localhost systemd-sysv-generator[194098]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:29:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:29:31 localhost systemd[1]: tmp-crun.WhjzPF.mount: Deactivated successfully. 
Dec 2 04:29:31 localhost podman[194120]: 2025-12-02 09:29:31.474129599 +0000 UTC m=+0.105703822 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:29:31 localhost podman[194120]: 2025-12-02 09:29:31.544738467 +0000 UTC m=+0.176312730 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 04:29:31 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:29:32 localhost python3.9[194239]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 2 04:29:32 localhost systemd[1]: Reloading.
Dec 2 04:29:32 localhost systemd-rc-local-generator[194264]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:29:32 localhost systemd-sysv-generator[194269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:29:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38737 DF PROTO=TCP SPT=45840 DPT=9102 SEQ=2720412911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794E96D0000000001030307)
Dec 2 04:29:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64749 DF PROTO=TCP SPT=51178 DPT=9105 SEQ=986500614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794E9EE0000000001030307)
Dec 2 04:29:34 localhost python3.9[194388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:35 localhost python3.9[194501]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:36 localhost python3.9[194614]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:29:36 localhost podman[194616]: 2025-12-02 09:29:36.587436046 +0000 UTC m=+0.096971930 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:29:36 localhost podman[194616]: 2025-12-02 09:29:36.62256069 +0000 UTC m=+0.132096534 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 2 04:29:36 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:29:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38739 DF PROTO=TCP SPT=45840 DPT=9102 SEQ=2720412911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4794F5650000000001030307)
Dec 2 04:29:37 localhost python3.9[194743]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:38 localhost python3.9[194856]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:38 localhost python3.9[194969]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 2 04:29:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51120 DF PROTO=TCP SPT=59094 DPT=9100 SEQ=827855319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479501640000000001030307) Dec 2 04:29:40 localhost python3.9[195082]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:42 localhost python3.9[195195]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:43 localhost python3.9[195308]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23261 DF PROTO=TCP SPT=59448 DPT=9882 SEQ=1754111265 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47950D650000000001030307) Dec 2 04:29:44 localhost python3.9[195421]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:44 localhost python3.9[195534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:45 localhost python3.9[195647]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51122 DF PROTO=TCP SPT=59094 DPT=9100 SEQ=827855319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479519250000000001030307) Dec 2 04:29:46 localhost python3.9[195760]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:47 localhost python3.9[195873]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 2 04:29:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38741 DF PROTO=TCP SPT=45840 DPT=9102 SEQ=2720412911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479525E40000000001030307) Dec 2 04:29:51 localhost python3.9[195986]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:29:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23545 DF PROTO=TCP SPT=43714 DPT=9101 SEQ=1325632831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479530E00000000001030307) Dec 2 04:29:53 localhost python3.9[196096]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:29:53 localhost python3.9[196206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:29:54 localhost python3.9[196316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:29:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23547 DF PROTO=TCP SPT=43714 DPT=9101 SEQ=1325632831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47953CE40000000001030307) Dec 2 04:29:55 localhost python3.9[196426]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:29:55 
localhost python3.9[196536]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:29:56 localhost python3.9[196646]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:29:57 localhost python3.9[196736]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667796.2832527-1644-176889572486854/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:29:58 localhost python3.9[196846]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:29:58 localhost python3.9[196936]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667797.7394698-1644-203025047166554/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None 
serole=None selevel=None setype=None attributes=None Dec 2 04:29:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23548 DF PROTO=TCP SPT=43714 DPT=9101 SEQ=1325632831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47954CA40000000001030307) Dec 2 04:29:59 localhost python3.9[197046]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:29:59 localhost python3.9[197136]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667798.9228027-1644-158592085148154/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:00 localhost python3.9[197246]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:01 localhost python3.9[197336]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667800.1194727-1644-100293923166625/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 
2 04:30:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:30:01 localhost python3.9[197446]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:01 localhost podman[197447]: 2025-12-02 09:30:01.984392025 +0000 UTC m=+0.344048730 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 04:30:02 localhost podman[197447]: 2025-12-02 09:30:02.028102568 +0000 UTC m=+0.387759303 container exec_died 
cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 04:30:02 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:30:02 localhost python3.9[197561]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667801.316876-1644-70596669830048/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:30:03.008 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:30:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:30:03.009 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:30:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:30:03.010 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:30:03 localhost python3.9[197671]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:30:03 localhost python3.9[197761]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667802.5491602-1644-199823953317529/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20129 DF PROTO=TCP SPT=58394 DPT=9102 SEQ=1061814342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47955E9D0000000001030307)
Dec 2 04:30:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12755 DF PROTO=TCP SPT=34426 DPT=9105 SEQ=4229317798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47955F1E0000000001030307)
Dec 2 04:30:04 localhost python3.9[197871]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:30:05 localhost python3.9[197959]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667803.6481411-1644-79375034844437/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:05 localhost python3.9[198069]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:30:06 localhost python3.9[198159]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667805.1673033-1644-70369463435257/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20131 DF PROTO=TCP SPT=58394 DPT=9102 SEQ=1061814342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47956AA40000000001030307)
Dec 2 04:30:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:30:07 localhost podman[198270]: 2025-12-02 09:30:07.408977082 +0000 UTC m=+0.085134031 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:30:07 localhost podman[198270]: 2025-12-02 09:30:07.443040777 +0000 UTC m=+0.119197746 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 2 04:30:07 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:30:07 localhost python3.9[198269]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:08 localhost python3.9[198395]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:08 localhost python3.9[198505]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:09 localhost python3.9[198615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:09 localhost python3.9[198725]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29080 DF PROTO=TCP SPT=47358 DPT=9100 SEQ=3869807480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479576A40000000001030307)
Dec 2 04:30:10 localhost python3.9[198871]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:10 localhost python3.9[199037]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:11 localhost python3.9[199176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:12 localhost python3.9[199290]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:12 localhost python3.9[199418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60022 DF PROTO=TCP SPT=35648 DPT=9100 SEQ=2819697767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479581E40000000001030307)
Dec 2 04:30:13 localhost python3.9[199528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:13 localhost python3.9[199638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:15 localhost python3.9[199748]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:15 localhost python3.9[199858]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:16 localhost python3.9[199968]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:30:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29082 DF PROTO=TCP SPT=47358 DPT=9100 SEQ=3869807480
ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47958E640000000001030307) Dec 2 04:30:18 localhost python3.9[200078]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:19 localhost python3.9[200166]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667818.0274138-2307-91523881032296/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20133 DF PROTO=TCP SPT=58394 DPT=9102 SEQ=1061814342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479599E40000000001030307) Dec 2 04:30:19 localhost python3.9[200276]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:20 localhost python3.9[200364]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667819.173797-2307-237584841824311/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:20 localhost python3.9[200474]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:21 localhost python3.9[200562]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667820.2755363-2307-196115417920863/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:21 localhost python3.9[200672]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56915 DF PROTO=TCP SPT=47022 DPT=9101 SEQ=2300905295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4795A60F0000000001030307) Dec 2 04:30:22 localhost python3.9[200760]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667821.3090246-2307-68072189585767/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:22 localhost python3.9[200870]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:23 localhost python3.9[200958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667822.5160499-2307-94609702568089/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:24 localhost python3.9[201068]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:24 localhost python3.9[201156]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667823.6544166-2307-58264029808066/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:25 localhost python3.9[201266]: 
ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56917 DF PROTO=TCP SPT=47022 DPT=9101 SEQ=2300905295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4795B2250000000001030307) Dec 2 04:30:25 localhost python3.9[201354]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667824.6975358-2307-127646501766313/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:26 localhost python3.9[201464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:27 localhost python3.9[201552]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667826.0708952-2307-31364897240294/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None 
setype=None attributes=None Dec 2 04:30:27 localhost python3.9[201662]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:28 localhost python3.9[201750]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667827.1615138-2307-231653903432193/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:28 localhost python3.9[201860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:29 localhost python3.9[201948]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667828.2851064-2307-188761305600476/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56918 DF PROTO=TCP SPT=47022 DPT=9101 SEQ=2300905295 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4795C1E40000000001030307) Dec 2 04:30:29 localhost python3.9[202058]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:30 localhost python3.9[202146]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667829.3924565-2307-159348590513793/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:30 localhost python3.9[202256]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:31 localhost python3.9[202344]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667830.4346-2307-34396234748468/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:31 localhost python3.9[202454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:30:32 localhost podman[202543]: 2025-12-02 09:30:32.404740292 +0000 UTC m=+0.091162825 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 04:30:32 localhost podman[202543]: 2025-12-02 09:30:32.445247805 +0000 UTC m=+0.131670328 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:30:32 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:30:32 localhost python3.9[202542]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667831.519853-2307-48793818711791/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:33 localhost python3.9[202676]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:33 localhost python3.9[202764]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667832.62407-2307-135438256528916/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61452 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4027416656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4795D3CD0000000001030307) Dec 2 04:30:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60140 DF 
PROTO=TCP SPT=46802 DPT=9105 SEQ=1932042984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4795D44E0000000001030307) Dec 2 04:30:34 localhost python3.9[202872]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:30:35 localhost python3.9[202985]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Dec 2 04:30:36 localhost python3.9[203095]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:36 localhost systemd[1]: Reloading. Dec 2 04:30:36 localhost systemd-rc-local-generator[203124]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:30:36 localhost systemd-sysv-generator[203128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61454 DF PROTO=TCP SPT=51642 DPT=9102 SEQ=4027416656 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4795DFE40000000001030307) Dec 2 04:30:37 localhost systemd[1]: Starting libvirt logging daemon socket... Dec 2 04:30:37 localhost systemd[1]: Listening on libvirt logging daemon socket. Dec 2 04:30:37 localhost systemd[1]: Starting libvirt logging daemon admin socket... Dec 2 04:30:37 localhost systemd[1]: Listening on libvirt logging daemon admin socket. 
Dec 2 04:30:37 localhost systemd[1]: Starting libvirt logging daemon... Dec 2 04:30:37 localhost systemd[1]: Started libvirt logging daemon. Dec 2 04:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:30:37 localhost podman[203248]: 2025-12-02 09:30:37.848428844 +0000 UTC m=+0.080827793 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 04:30:37 localhost podman[203248]: 2025-12-02 09:30:37.860491612 +0000 UTC m=+0.092890551 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:30:37 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:30:38 localhost python3.9[203249]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:38 localhost systemd[1]: Reloading. Dec 2 04:30:38 localhost systemd-sysv-generator[203291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:30:38 localhost systemd-rc-local-generator[203284]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:38 localhost systemd[1]: Starting libvirt nodedev daemon socket... Dec 2 04:30:38 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Dec 2 04:30:38 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Dec 2 04:30:38 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Dec 2 04:30:38 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Dec 2 04:30:38 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Dec 2 04:30:38 localhost systemd[1]: Started libvirt nodedev daemon. Dec 2 04:30:39 localhost python3.9[203441]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:39 localhost systemd[1]: Reloading. Dec 2 04:30:39 localhost systemd-sysv-generator[203471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:30:39 localhost systemd-rc-local-generator[203468]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:39 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 2 04:30:39 localhost systemd[1]: Starting libvirt proxy daemon socket... Dec 2 04:30:39 localhost systemd[1]: Listening on libvirt proxy daemon socket. Dec 2 04:30:39 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Dec 2 04:30:39 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Dec 2 04:30:39 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Dec 2 04:30:39 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Dec 2 04:30:39 localhost systemd[1]: Started libvirt proxy daemon. 
Dec 2 04:30:39 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 2 04:30:39 localhost setroubleshoot[203478]: Deleting alert c62ace7d-fc71-492d-8738-6cc52b8f8f8f, it is allowed in current policy Dec 2 04:30:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41808 DF PROTO=TCP SPT=50554 DPT=9100 SEQ=1535800412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4795EBA50000000001030307) Dec 2 04:30:40 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service. Dec 2 04:30:40 localhost python3.9[203618]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:40 localhost systemd[1]: Reloading. Dec 2 04:30:40 localhost systemd-rc-local-generator[203646]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:30:40 localhost systemd-sysv-generator[203649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:40 localhost systemd[1]: Listening on libvirt locking daemon socket. Dec 2 04:30:40 localhost systemd[1]: Starting libvirt QEMU daemon socket... Dec 2 04:30:40 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Dec 2 04:30:40 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... Dec 2 04:30:40 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Dec 2 04:30:40 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Dec 2 04:30:40 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Dec 2 04:30:40 localhost systemd[1]: Started libvirt QEMU daemon. 
Dec 2 04:30:40 localhost setroubleshoot[203478]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a3e9145b-2d8e-4e66-ba12-5632331a74ce Dec 2 04:30:40 localhost setroubleshoot[203478]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Dec 2 04:30:40 localhost setroubleshoot[203478]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. 
For complete SELinux messages run: sealert -l a3e9145b-2d8e-4e66-ba12-5632331a74ce Dec 2 04:30:40 localhost setroubleshoot[203478]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Dec 2 04:30:41 localhost python3.9[203804]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:30:41 localhost systemd[1]: Reloading. Dec 2 04:30:41 localhost systemd-rc-local-generator[203833]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:30:41 localhost systemd-sysv-generator[203837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:30:41 localhost systemd[1]: Starting libvirt secret daemon socket... Dec 2 04:30:41 localhost systemd[1]: Listening on libvirt secret daemon socket. Dec 2 04:30:41 localhost systemd[1]: Starting libvirt secret daemon admin socket... Dec 2 04:30:41 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Dec 2 04:30:41 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Dec 2 04:30:41 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Dec 2 04:30:41 localhost systemd[1]: Started libvirt secret daemon. 
Dec 2 04:30:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40770 DF PROTO=TCP SPT=51764 DPT=9882 SEQ=1877533652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4795F7E40000000001030307) Dec 2 04:30:45 localhost python3.9[203987]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:45 localhost python3.9[204097]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:30:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41810 DF PROTO=TCP SPT=50554 DPT=9100 SEQ=1535800412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479603650000000001030307) Dec 2 04:30:46 localhost python3.9[204207]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:30:47 localhost python3.9[204319]: ansible-ansible.builtin.find 
Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:30:48 localhost python3.9[204427]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:49 localhost python3.9[204513]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667848.0374932-3171-110454829948638/.source.xml follow=False _original_basename=secret.xml.j2 checksum=45e14b3898e47796a04e3213d8ff716cad2ef6d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60144 DF PROTO=TCP SPT=46802 DPT=9105 SEQ=1932042984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47960FE40000000001030307) Dec 2 04:30:49 localhost python3.9[204623]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine c7c8e171-a193-56fb-95fa-8879fcfa7074#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:30:50 localhost python3.9[204743]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:51 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Dec 2 04:30:51 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Dec 2 04:30:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35829 DF PROTO=TCP SPT=44798 DPT=9101 SEQ=1382640492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47961B400000000001030307) Dec 2 04:30:54 localhost python3.9[205080]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:55 localhost python3.9[205190]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35831 DF PROTO=TCP SPT=44798 DPT=9101 SEQ=1382640492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479627640000000001030307) Dec 2 04:30:55 localhost python3.9[205278]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667854.646691-3336-142424303568259/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:57 localhost python3.9[205388]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:57 localhost python3.9[205498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:30:58 localhost python3.9[205555]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:30:59 localhost python3.9[205665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Dec 2 04:30:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35832 DF PROTO=TCP SPT=44798 DPT=9101 SEQ=1382640492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479637240000000001030307) Dec 2 04:30:59 localhost python3.9[205722]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x0dw1ram recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:00 localhost python3.9[205832]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:00 localhost python3.9[205889]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:01 localhost python3.9[205999]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:31:02 localhost python3[206110]: 
ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 2 04:31:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:31:02 localhost podman[206221]: 2025-12-02 09:31:02.843460576 +0000 UTC m=+0.089132549 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 04:31:02 localhost podman[206221]: 2025-12-02 09:31:02.890193998 +0000 UTC m=+0.135866001 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:31:02 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:31:02 localhost python3.9[206220]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:31:03.009 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:31:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:31:03.010 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:31:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:31:03.011 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:31:03 localhost python3.9[206302]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4948 DF PROTO=TCP SPT=37648 DPT=9102 SEQ=72312645 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479648FD0000000001030307) Dec 2 04:31:04 localhost python3.9[206412]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12827 DF PROTO=TCP SPT=59402 DPT=9105 SEQ=3264827158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796497F0000000001030307) Dec 2 04:31:04 localhost python3.9[206469]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:05 localhost python3.9[206579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:05 localhost python3.9[206636]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:06 localhost python3.9[206746]: ansible-ansible.legacy.stat Invoked 
with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4950 DF PROTO=TCP SPT=37648 DPT=9102 SEQ=72312645 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479655240000000001030307) Dec 2 04:31:07 localhost python3.9[206803]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:07 localhost python3.9[206913]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:31:08 localhost podman[207004]: 2025-12-02 09:31:08.302188707 +0000 UTC m=+0.089592332 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 2 04:31:08 localhost podman[207004]: 2025-12-02 09:31:08.333504569 +0000 UTC 
m=+0.120908184 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:31:08 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:31:08 localhost python3.9[207003]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667867.3709314-3711-118293261624326/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:09 localhost python3.9[207131]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:09 localhost python3.9[207241]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:31:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61262 DF PROTO=TCP SPT=49304 DPT=9100 SEQ=1414406547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479660E40000000001030307) Dec 2 04:31:11 localhost python3.9[207354]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include 
"/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:11 localhost python3.9[207464]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:31:12 localhost python3.9[207623]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:31:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42393 DF PROTO=TCP SPT=43504 DPT=9882 SEQ=1227093900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47966D240000000001030307) Dec 2 04:31:13 localhost python3.9[207754]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:31:14 localhost python3.9[207885]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:15 localhost python3.9[207995]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:15 localhost python3.9[208083]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667874.5003877-3927-60140911227733/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61264 DF PROTO=TCP SPT=49304 DPT=9100 SEQ=1414406547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479678A40000000001030307) Dec 2 04:31:16 localhost python3.9[208193]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:16 localhost python3.9[208281]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667875.881825-3972-145378283632900/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True 
remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:17 localhost python3.9[208391]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:31:18 localhost python3.9[208479]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667877.1341233-4017-270469235903438/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:18 localhost python3.9[208589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:31:18 localhost systemd[1]: Reloading. Dec 2 04:31:18 localhost systemd-sysv-generator[208617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:31:18 localhost systemd-rc-local-generator[208613]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:19 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12831 DF PROTO=TCP SPT=59402 DPT=9105 SEQ=3264827158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479685E40000000001030307) Dec 2 04:31:20 localhost systemd[1]: Reached target edpm_libvirt.target. Dec 2 04:31:21 localhost python3.9[208739]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 2 04:31:21 localhost systemd[1]: Reloading. 
Dec 2 04:31:21 localhost systemd-rc-local-generator[208767]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:31:21 localhost systemd-sysv-generator[208770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64496 DF PROTO=TCP SPT=50994 DPT=9101 SEQ=1619115592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479690700000000001030307) Dec 2 04:31:23 localhost systemd[1]: Reloading. Dec 2 04:31:23 localhost systemd-rc-local-generator[208800]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:31:23 localhost systemd-sysv-generator[208804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:23 localhost systemd[1]: session-53.scope: Deactivated successfully. Dec 2 04:31:23 localhost systemd[1]: session-53.scope: Consumed 3min 53.501s CPU time. Dec 2 04:31:23 localhost systemd-logind[757]: Session 53 logged out. Waiting for processes to exit. Dec 2 04:31:23 localhost systemd-logind[757]: Removed session 53. 
Dec 2 04:31:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64498 DF PROTO=TCP SPT=50994 DPT=9101 SEQ=1619115592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47969C640000000001030307) Dec 2 04:31:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64499 DF PROTO=TCP SPT=50994 DPT=9101 SEQ=1619115592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796AC240000000001030307) Dec 2 04:31:29 localhost sshd[208830]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:31:29 localhost systemd-logind[757]: New session 54 of user zuul. Dec 2 04:31:29 localhost systemd[1]: Started Session 54 of User zuul. Dec 2 04:31:30 localhost python3.9[208941]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:31:31 localhost python3.9[209053]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:31:31 localhost network[209070]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:31:31 localhost network[209071]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:31:31 localhost network[209072]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:31:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:31:33 localhost systemd[1]: tmp-crun.4LaZ4V.mount: Deactivated successfully. 
Dec 2 04:31:33 localhost podman[209103]: 2025-12-02 09:31:33.06019302 +0000 UTC m=+0.094831282 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 04:31:33 localhost podman[209103]: 2025-12-02 09:31:33.13070344 +0000 UTC m=+0.165341732 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 04:31:33 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:31:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:31:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38915 DF PROTO=TCP SPT=43762 DPT=9102 SEQ=1342219773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796BE2E0000000001030307) Dec 2 04:31:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11970 DF PROTO=TCP SPT=55084 DPT=9105 SEQ=4004772905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796BEAF0000000001030307) Dec 2 04:31:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38917 DF PROTO=TCP SPT=43762 DPT=9102 SEQ=1342219773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796CA250000000001030307) Dec 2 04:31:37 localhost python3.9[209332]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:31:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:31:39 localhost systemd[1]: tmp-crun.V4UZ8y.mount: Deactivated successfully. 
Dec 2 04:31:39 localhost podman[209396]: 2025-12-02 09:31:39.047083192 +0000 UTC m=+0.079758708 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 2 04:31:39 localhost podman[209396]: 2025-12-02 09:31:39.05222942 +0000 UTC 
m=+0.084904976 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 04:31:39 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:31:39 localhost python3.9[209395]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:31:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40773 DF PROTO=TCP SPT=51764 DPT=9882 SEQ=1877533652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796D5E40000000001030307) Dec 2 04:31:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41813 DF PROTO=TCP SPT=50554 DPT=9100 SEQ=1535800412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796E1E50000000001030307) Dec 2 04:31:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56853 DF PROTO=TCP SPT=59142 DPT=9100 SEQ=2753570109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796EDE40000000001030307) Dec 2 04:31:47 localhost python3.9[209525]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:31:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11974 DF PROTO=TCP SPT=55084 DPT=9105 SEQ=4004772905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4796F9FD0000000001030307) Dec 2 04:31:49 localhost python3.9[209637]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:50 localhost python3.9[209747]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:31:51 localhost python3.9[209858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:31:52 localhost python3.9[209969]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:31:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62804 DF PROTO=TCP SPT=37640 DPT=9101 SEQ=4112707506 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080A4797059F0000000001030307) Dec 2 04:31:52 localhost python3.9[210080]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:31:53 localhost python3.9[210192]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:31:55 localhost python3.9[210302]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:31:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62806 DF PROTO=TCP SPT=37640 DPT=9101 SEQ=4112707506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479711A40000000001030307) Dec 2 04:31:55 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket. Dec 2 04:31:56 localhost python3.9[210416]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:31:56 localhost systemd[1]: Reloading. Dec 2 04:31:56 localhost systemd-sysv-generator[210448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:31:56 localhost systemd-rc-local-generator[210442]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:31:56 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi). Dec 2 04:31:56 localhost systemd[1]: Starting Open-iSCSI... 
Dec 2 04:31:56 localhost iscsid[210457]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 2 04:31:56 localhost iscsid[210457]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 2 04:31:56 localhost iscsid[210457]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 2 04:31:56 localhost iscsid[210457]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 2 04:31:56 localhost iscsid[210457]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 2 04:31:56 localhost iscsid[210457]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 2 04:31:56 localhost iscsid[210457]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf Dec 2 04:31:56 localhost systemd[1]: Started Open-iSCSI. Dec 2 04:31:56 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown... Dec 2 04:31:56 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown. Dec 2 04:31:58 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 2 04:31:58 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 2 04:31:58 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service. Dec 2 04:31:58 localhost python3.9[210569]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:31:58 localhost network[210599]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Dec 2 04:31:58 localhost network[210600]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:31:58 localhost network[210601]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:31:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62807 DF PROTO=TCP SPT=37640 DPT=9101 SEQ=4112707506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479721650000000001030307) Dec 2 04:31:59 localhost setroubleshoot[210489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 53657d6c-c100-406f-a5c8-7ed1309fb42f Dec 2 04:31:59 localhost setroubleshoot[210489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 [the preceding two setroubleshoot messages repeat five more times verbatim] 
Dec 2 04:32:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:32:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:32:03.013 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:32:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:32:03.017 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:32:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:32:03.020 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
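The setroubleshoot entries above embed their suggested remediation with #012 newline escapes, which makes the two commands hard to read. Unescaped, this is the standard audit2allow workflow for building a local SELinux policy module; running it requires root and a populated audit log, so the commands are only captured and printed here, not executed:

```shell
# The remediation setroubleshoot suggests, unescaped from its #012 (newline)
# encoding. Capture-and-print only: actually running it needs root + auditd.
cmds=$(cat <<'EOF'
ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
semodule -X 300 -i my-iscsid.pp
EOF
)
printf '%s\n' "$cmds"
```

The first command turns the raw AVC denials for the iscsid command into a policy module named my-iscsid; the second installs it at priority 300. Before doing that, it is worth checking whether the denial is instead a file-labeling problem on /etc/iscsi (the "can't open" errors earlier in the log point the same way).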
Dec 2 04:32:03 localhost podman[210694]: 2025-12-02 09:32:03.287963058 +0000 UTC m=+0.101063129 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125) Dec 2 04:32:03 localhost podman[210694]: 2025-12-02 09:32:03.360129712 +0000 UTC m=+0.173229773 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:32:03 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:32:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41133 DF PROTO=TCP SPT=52974 DPT=9102 SEQ=76396377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797335E0000000001030307) Dec 2 04:32:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48145 DF PROTO=TCP SPT=44198 DPT=9105 SEQ=2160779315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479733DF0000000001030307) Dec 2 04:32:06 localhost python3.9[210858]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 04:32:06 localhost python3.9[210968]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Dec 2 04:32:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41135 DF PROTO=TCP SPT=52974 DPT=9102 SEQ=76396377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47973F650000000001030307) Dec 2 04:32:07 localhost python3.9[211082]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:08 localhost python3.9[211170]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764667927.1810136-456-6334875595177/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:09 localhost python3.9[211280]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:32:09 localhost podman[211298]: 2025-12-02 09:32:09.458364318 +0000 UTC m=+0.094624186 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent) Dec 2 04:32:09 localhost podman[211298]: 2025-12-02 09:32:09.494445285 +0000 UTC m=+0.130705133 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:32:09 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:32:09 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service: Deactivated successfully. Dec 2 04:32:09 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Dec 2 04:32:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25210 DF PROTO=TCP SPT=42364 DPT=9100 SEQ=2668511031 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47974B650000000001030307) Dec 2 04:32:10 localhost python3.9[211408]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:32:10 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 2 04:32:10 localhost systemd[1]: Stopped Load Kernel Modules. Dec 2 04:32:10 localhost systemd[1]: Stopping Load Kernel Modules... 
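For context on the systemd-modules-load restart above: the ansible-ansible.legacy.copy task at 04:32:08 wrote /etc/modules-load.d/dm-multipath.conf (from a template named module-load.conf.j2), and the restart makes systemd-modules-load re-read it. Given the one-module-per-line format that systemd-modules-load expects, the generated file plausibly contains just:

```
# /etc/modules-load.d/dm-multipath.conf (assumed content, not shown in the log)
dm-multipath
```

The companion lineinfile task also adds the same module name to /etc/modules for compatibility with non-systemd tooling.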
Dec 2 04:32:10 localhost systemd[1]: Starting Load Kernel Modules... Dec 2 04:32:10 localhost systemd-modules-load[211412]: Module 'msr' is built in Dec 2 04:32:10 localhost systemd[1]: Finished Load Kernel Modules. Dec 2 04:32:11 localhost python3.9[211522]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:32:12 localhost python3.9[211632]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:12 localhost python3.9[211742]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40252 DF PROTO=TCP SPT=40544 DPT=9882 SEQ=335701621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479757640000000001030307) Dec 2 04:32:13 localhost python3.9[211852]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:13 localhost python3.9[211940]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667932.9422061-630-53145120483686/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:14 localhost python3.9[212126]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:32:14 localhost podman[212159]: 2025-12-02 09:32:14.701112029 +0000 UTC m=+0.090323201 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container) Dec 2 04:32:14 localhost podman[212159]: 2025-12-02 09:32:14.798027166 
+0000 UTC m=+0.187238438 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:32:15 localhost python3.9[212351]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25212 DF PROTO=TCP SPT=42364 DPT=9100 SEQ=2668511031 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479763240000000001030307) Dec 2 04:32:16 localhost python3.9[212512]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:16 localhost python3.9[212640]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:18 localhost python3.9[212751]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:18 localhost python3.9[212861]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48149 DF PROTO=TCP SPT=44198 DPT=9105 SEQ=2160779315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47976FE40000000001030307) Dec 2 
04:32:19 localhost python3.9[212971]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:20 localhost python3.9[213081]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:20 localhost python3.9[213191]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:21 localhost python3.9[213303]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57613 DF PROTO=TCP SPT=36730 DPT=9101 SEQ=3065102644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47977AD10000000001030307) Dec 2 04:32:22 localhost python3.9[213413]: ansible-ansible.builtin.file Invoked with 
path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:32:23 localhost python3.9[213523]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:23 localhost python3.9[213580]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:32:24 localhost python3.9[213690]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:24 localhost python3.9[213747]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 
04:32:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57615 DF PROTO=TCP SPT=36730 DPT=9101 SEQ=3065102644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479786E40000000001030307) Dec 2 04:32:25 localhost python3.9[213857]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:26 localhost python3.9[213967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:26 localhost python3.9[214024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:27 localhost python3.9[214134]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:28 localhost python3.9[214191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 
owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:29 localhost python3.9[214301]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:32:29 localhost systemd[1]: Reloading. Dec 2 04:32:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57616 DF PROTO=TCP SPT=36730 DPT=9101 SEQ=3065102644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479796A40000000001030307) Dec 2 04:32:29 localhost systemd-rc-local-generator[214330]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:32:29 localhost systemd-sysv-generator[214333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
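Tracing the multipath.conf edit sequence logged between 04:32:13 and 04:32:20 (append "blacklist {", close it with "}", strip any blanket devnode ".*" entry, then insert four settings after the line matching ^defaults): because each lineinfile uses insertafter=^defaults with firstmatch, each new line lands directly under the defaults line, so the last task's setting ends up first. Assuming the copied template had a defaults section and none of these settings pre-existed, the resulting /etc/multipath.conf would plausibly contain:

```
defaults {
        user_friendly_names no
        skip_kpartx yes
        recheck_wwid yes
        find_multipaths yes
}

blacklist {
}
```

This is a reconstruction from the logged Ansible invocations, not a file captured in the log; the empty blacklist section means no devices are excluded from multipathing.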
Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:29 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:31 localhost python3.9[214450]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:31 localhost python3.9[214507]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None 
src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:32 localhost python3.9[214617]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:32 localhost python3.9[214674]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:32:33 localhost podman[214785]: 2025-12-02 09:32:33.588033723 +0000 UTC m=+0.091858602 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:32:33 localhost podman[214785]: 2025-12-02 09:32:33.64316372 +0000 UTC m=+0.146988609 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:32:33 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:32:33 localhost python3.9[214784]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:32:33 localhost systemd[1]: Reloading. Dec 2 04:32:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31116 DF PROTO=TCP SPT=40090 DPT=9102 SEQ=661332112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797A88D0000000001030307) Dec 2 04:32:33 localhost systemd-rc-local-generator[214836]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:32:33 localhost systemd-sysv-generator[214839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:32:34 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:34 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:34 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:34 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64292 DF PROTO=TCP SPT=57484 DPT=9105 SEQ=3569643623 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797A90F0000000001030307) Dec 2 04:32:34 localhost systemd[1]: Starting Create netns directory... Dec 2 04:32:34 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Dec 2 04:32:34 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:32:34 localhost systemd[1]: Finished Create netns directory. Dec 2 04:32:35 localhost python3.9[214960]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:32:35 localhost python3.9[215070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:36 localhost python3.9[215158]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667955.2667012-1251-2359701460469/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:32:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31118 DF PROTO=TCP SPT=40090 DPT=9102 SEQ=661332112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797B4A40000000001030307) Dec 2 04:32:37 localhost python3.9[215268]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:32:37 localhost python3.9[215378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:32:38 localhost python3.9[215466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667957.5600255-1326-218548267908060/.source.json _original_basename=.uo38n_h8 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:38 localhost systemd[1]: virtnodedevd.service: Deactivated successfully. Dec 2 04:32:39 localhost python3.9[215577]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:39 localhost systemd[1]: virtproxyd.service: Deactivated successfully. Dec 2 04:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:32:39 localhost podman[215650]: 2025-12-02 09:32:39.799965501 +0000 UTC m=+0.093629702 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent) Dec 2 04:32:39 localhost podman[215650]: 2025-12-02 09:32:39.811138253 +0000 UTC 
m=+0.104802454 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:32:39 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:32:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52890 DF PROTO=TCP SPT=56238 DPT=9100 SEQ=3280184966 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797C0A40000000001030307) Dec 2 04:32:41 localhost python3.9[215904]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Dec 2 04:32:42 localhost python3.9[216014]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:32:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56856 DF PROTO=TCP SPT=59142 DPT=9100 SEQ=2753570109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797CBE40000000001030307) Dec 2 04:32:43 localhost python3.9[216124]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 2 04:32:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52892 DF PROTO=TCP SPT=56238 DPT=9100 SEQ=3280184966 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797D8640000000001030307) Dec 2 04:32:47 localhost python3[216262]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:32:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31120 DF PROTO=TCP 
SPT=40090 DPT=9102 SEQ=661332112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797E3E40000000001030307) Dec 2 04:32:50 localhost podman[216275]: 2025-12-02 09:32:48.046984353 +0000 UTC m=+0.047470394 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Dec 2 04:32:50 localhost podman[216323]: Dec 2 04:32:50 localhost podman[216323]: 2025-12-02 09:32:50.309809698 +0000 UTC m=+0.094155565 container create f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 2 04:32:50 localhost podman[216323]: 2025-12-02 09:32:50.265577933 +0000 UTC m=+0.049923860 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Dec 2 04:32:50 localhost python3[216262]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified Dec 2 04:32:51 localhost python3.9[216471]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37304 DF PROTO=TCP SPT=36884 DPT=9101 SEQ=72661203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797F0010000000001030307) Dec 2 04:32:52 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Dec 2 04:32:52 localhost python3.9[216584]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:53 localhost python3.9[216639]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:54 localhost python3.9[216748]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667973.686059-1590-16600817307411/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:32:55 localhost python3.9[216803]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:32:55 localhost systemd[1]: Reloading. Dec 2 04:32:55 localhost systemd-rc-local-generator[216825]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:32:55 localhost systemd-sysv-generator[216832]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:55 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37306 DF PROTO=TCP SPT=36884 DPT=9101 SEQ=72661203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4797FC240000000001030307) Dec 2 04:32:56 localhost python3.9[216894]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:32:56 localhost systemd[1]: Reloading. 
Dec 2 04:32:56 localhost systemd-rc-local-generator[216923]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:32:56 localhost systemd-sysv-generator[216926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:32:56 localhost systemd[1]: Starting multipathd container... Dec 2 04:32:56 localhost systemd[1]: Started libcrun container. 
Dec 2 04:32:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/decc3fae46c177fc07a98939101be81ba2acbbce5cd8ac84de4a05d1c252d1c6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 2 04:32:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/decc3fae46c177fc07a98939101be81ba2acbbce5cd8ac84de4a05d1c252d1c6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 04:32:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:32:56 localhost podman[216935]: 2025-12-02 09:32:56.52280793 +0000 UTC m=+0.135434002 container init f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 04:32:56 localhost multipathd[216950]: + sudo -E kolla_set_configs Dec 2 04:32:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:32:56 localhost podman[216935]: 2025-12-02 09:32:56.568236417 +0000 UTC m=+0.180862499 container start f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 04:32:56 localhost podman[216935]: multipathd Dec 2 04:32:56 localhost systemd[1]: Started multipathd container. Dec 2 04:32:56 localhost multipathd[216950]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:32:56 localhost multipathd[216950]: INFO:__main__:Validating config file Dec 2 04:32:56 localhost multipathd[216950]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:32:56 localhost multipathd[216950]: INFO:__main__:Writing out command to execute Dec 2 04:32:56 localhost multipathd[216950]: ++ cat /run_command Dec 2 04:32:56 localhost multipathd[216950]: + CMD='/usr/sbin/multipathd -d' Dec 2 04:32:56 localhost multipathd[216950]: + ARGS= Dec 2 04:32:56 localhost multipathd[216950]: + sudo kolla_copy_cacerts Dec 2 04:32:56 localhost podman[216959]: 2025-12-02 09:32:56.637960512 +0000 UTC m=+0.066497488 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 04:32:56 localhost multipathd[216950]: + [[ ! -n '' ]] Dec 2 04:32:56 localhost multipathd[216950]: + . 
kolla_extend_start Dec 2 04:32:56 localhost multipathd[216950]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Dec 2 04:32:56 localhost multipathd[216950]: Running command: '/usr/sbin/multipathd -d' Dec 2 04:32:56 localhost multipathd[216950]: + umask 0022 Dec 2 04:32:56 localhost multipathd[216950]: + exec /usr/sbin/multipathd -d Dec 2 04:32:56 localhost multipathd[216950]: 10138.869656 | --------start up-------- Dec 2 04:32:56 localhost multipathd[216950]: 10138.869677 | read /etc/multipath.conf Dec 2 04:32:56 localhost podman[216959]: 2025-12-02 09:32:56.651959761 +0000 UTC m=+0.080496687 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:32:56 localhost multipathd[216950]: 10138.873280 | path checkers start up Dec 2 04:32:56 localhost podman[216959]: unhealthy Dec 2 04:32:56 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:32:56 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Failed with result 'exit-code'. Dec 2 04:32:57 localhost python3.9[217096]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:32:58 localhost python3.9[217208]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:32:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37307 DF PROTO=TCP SPT=36884 DPT=9101 SEQ=72661203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47980BE50000000001030307) Dec 2 04:32:59 localhost python3.9[217331]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:32:59 localhost 
systemd[1]: Stopping multipathd container... Dec 2 04:32:59 localhost multipathd[216950]: 10141.879531 | exit (signal) Dec 2 04:32:59 localhost multipathd[216950]: 10141.879903 | --------shut down------- Dec 2 04:32:59 localhost systemd[1]: libpod-f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.scope: Deactivated successfully. Dec 2 04:32:59 localhost podman[217335]: 2025-12-02 09:32:59.691907193 +0000 UTC m=+0.069361586 container died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Dec 2 04:32:59 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.timer: Deactivated successfully. Dec 2 04:32:59 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a-userdata-shm.mount: Deactivated successfully. Dec 2 04:32:59 localhost systemd[1]: var-lib-containers-storage-overlay-decc3fae46c177fc07a98939101be81ba2acbbce5cd8ac84de4a05d1c252d1c6-merged.mount: Deactivated successfully. Dec 2 04:33:00 localhost podman[217335]: 2025-12-02 09:33:00.442828921 +0000 UTC m=+0.820283254 container cleanup f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 04:33:00 localhost podman[217335]: multipathd Dec 2 04:33:00 localhost podman[217364]: 2025-12-02 09:33:00.544050626 +0000 UTC m=+0.068103532 container cleanup f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 04:33:00 localhost podman[217364]: multipathd Dec 2 04:33:00 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully. Dec 2 04:33:00 localhost systemd[1]: Stopped multipathd container. Dec 2 04:33:00 localhost systemd[1]: Starting multipathd container... Dec 2 04:33:00 localhost systemd[1]: tmp-crun.kLfc3l.mount: Deactivated successfully. Dec 2 04:33:00 localhost systemd[1]: Started libcrun container. Dec 2 04:33:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/decc3fae46c177fc07a98939101be81ba2acbbce5cd8ac84de4a05d1c252d1c6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 2 04:33:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/decc3fae46c177fc07a98939101be81ba2acbbce5cd8ac84de4a05d1c252d1c6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 04:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:33:00 localhost podman[217377]: 2025-12-02 09:33:00.881821856 +0000 UTC m=+0.299083915 container init f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS) Dec 2 04:33:00 localhost multipathd[217390]: + sudo -E kolla_set_configs Dec 2 04:33:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:33:00 localhost podman[217377]: 2025-12-02 09:33:00.923312168 +0000 UTC m=+0.340574207 container start f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3) Dec 2 04:33:00 localhost podman[217377]: multipathd Dec 2 04:33:00 localhost systemd[1]: Started multipathd container. 
Dec 2 04:33:00 localhost multipathd[217390]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:33:00 localhost multipathd[217390]: INFO:__main__:Validating config file Dec 2 04:33:00 localhost multipathd[217390]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:33:00 localhost multipathd[217390]: INFO:__main__:Writing out command to execute Dec 2 04:33:00 localhost multipathd[217390]: ++ cat /run_command Dec 2 04:33:00 localhost multipathd[217390]: + CMD='/usr/sbin/multipathd -d' Dec 2 04:33:00 localhost multipathd[217390]: + ARGS= Dec 2 04:33:00 localhost multipathd[217390]: + sudo kolla_copy_cacerts Dec 2 04:33:01 localhost multipathd[217390]: + [[ ! -n '' ]] Dec 2 04:33:01 localhost multipathd[217390]: + . kolla_extend_start Dec 2 04:33:01 localhost multipathd[217390]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Dec 2 04:33:01 localhost multipathd[217390]: Running command: '/usr/sbin/multipathd -d' Dec 2 04:33:01 localhost multipathd[217390]: + umask 0022 Dec 2 04:33:01 localhost multipathd[217390]: + exec /usr/sbin/multipathd -d Dec 2 04:33:01 localhost multipathd[217390]: 10143.230588 | --------start up-------- Dec 2 04:33:01 localhost multipathd[217390]: 10143.230645 | read /etc/multipath.conf Dec 2 04:33:01 localhost multipathd[217390]: 10143.235825 | path checkers start up Dec 2 04:33:01 localhost podman[217398]: 2025-12-02 09:33:01.050231788 +0000 UTC m=+0.116036547 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:33:01 localhost podman[217398]: 2025-12-02 09:33:01.060985779 +0000 UTC m=+0.126790528 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:33:01 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
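The entries above show a complete failure-and-recovery cycle: the podman healthcheck run printed `unhealthy`, its transient systemd unit (`f79637bb….service`) exited with status 1, and the subsequent Ansible task restarted `edpm_multipathd`, after which the container came back up and `multipathd` re-read `/etc/multipath.conf`. (Note the podman timestamps are UTC, `09:32:56`, while the syslog prefix is local time, `04:32:56`.) A minimal sketch of how one might detect this pattern when scanning journal text — the helper name is hypothetical, not part of any tooling shown in the log:

```python
# Container ID taken verbatim from the log above.
CID = "f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a"

# Verbatim excerpts of the three journal lines that make up the failure signature.
journal = [
    "Dec 2 04:32:56 localhost podman[216959]: unhealthy",
    f"Dec 2 04:32:56 localhost systemd[1]: {CID}.service: Main process exited, code=exited, status=1/FAILURE",
    f"Dec 2 04:32:56 localhost systemd[1]: {CID}.service: Failed with result 'exit-code'.",
]

def healthcheck_failed(lines, cid):
    """True when a podman healthcheck run printed 'unhealthy' AND the matching
    transient <cid>.service unit failed with a non-zero exit code."""
    unhealthy = any("podman" in l and l.endswith("unhealthy") for l in lines)
    unit_failed = any(f"{cid}.service: Failed with result" in l for l in lines)
    return unhealthy and unit_failed

print(healthcheck_failed(journal, CID))  # True for this excerpt
```

The same signature (an `unhealthy` healthcheck followed by the transient unit failing) is what precedes the `edpm_multipathd` restart later in the log.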
Dec 2 04:33:01 localhost python3.9[217538]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:33:03.014 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:33:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:33:03.015 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:33:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:33:03.018 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:33:03 localhost python3.9[217648]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 04:33:03 localhost python3.9[217758]: ansible-community.general.modprobe Invoked with 
name=nvme-fabrics state=present params= persistent=disabled Dec 2 04:33:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:33:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20376 DF PROTO=TCP SPT=54526 DPT=9102 SEQ=964100235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47981DBE0000000001030307) Dec 2 04:33:03 localhost podman[217762]: 2025-12-02 09:33:03.983245949 +0000 UTC m=+0.086823858 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.schema-version=1.0) Dec 2 04:33:04 localhost podman[217762]: 2025-12-02 09:33:04.028312647 +0000 UTC m=+0.131890606 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:33:04 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:33:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40803 DF PROTO=TCP SPT=33038 DPT=9105 SEQ=1315420248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47981E3F0000000001030307) Dec 2 04:33:04 localhost python3.9[217902]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:33:05 localhost python3.9[217990]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667984.2512019-1830-138597268523202/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:06 localhost python3.9[218100]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:06 localhost python3.9[218210]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:33:06 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 2 04:33:06 localhost systemd[1]: Stopped Load Kernel Modules. 
Dec 2 04:33:06 localhost systemd[1]: Stopping Load Kernel Modules... Dec 2 04:33:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20378 DF PROTO=TCP SPT=54526 DPT=9102 SEQ=964100235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479829E40000000001030307) Dec 2 04:33:07 localhost systemd[1]: Starting Load Kernel Modules... Dec 2 04:33:07 localhost systemd-modules-load[218214]: Module 'msr' is built in Dec 2 04:33:07 localhost systemd[1]: Finished Load Kernel Modules. Dec 2 04:33:07 localhost python3.9[218324]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:33:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13959 DF PROTO=TCP SPT=47048 DPT=9100 SEQ=2305447161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479835A40000000001030307) Dec 2 04:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:33:10 localhost systemd[1]: tmp-crun.whhTqv.mount: Deactivated successfully. 
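The recurring kernel `DROPPING:` lines above are netfilter LOG-target output (the `DROPPING:` prefix is whatever string the firewall rule author configured); they record SYN packets from 192.168.122.10 to exporter-style ports (9100–9105) being dropped on `br-ex`. The entries are plain `KEY=VALUE` tokens, so they can be parsed mechanically. A small sketch, using one of the lines from this log:

```python
import re

# One kernel LOG entry copied from the log above (syslog prefix stripped).
line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 "
        "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 "
        "TTL=62 ID=37307 DF PROTO=TCP SPT=36884 DPT=9101 SEQ=72661203 ACK=0 WINDOW=32640 "
        "RES=0x00 SYN URGP=0")

def parse_drop(entry):
    """Turn the KEY=VALUE tokens of a netfilter LOG line into a dict.
    Flag-only tokens (DF, SYN) and empty values (OUT=) are skipped."""
    return dict(m.groups() for m in re.finditer(r"(\w+)=(\S+)", entry))

fields = parse_drop(line)
print(fields["SRC"], fields["DST"], fields["PROTO"], fields["DPT"])
# 192.168.122.10 192.168.122.107 TCP 9101
```

Grouping such records by `DPT` quickly shows that every drop in this excerpt targets ports 9100, 9101, 9102, or 9105 on 192.168.122.107.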
Dec 2 04:33:10 localhost podman[218327]: 2025-12-02 09:33:10.456482005 +0000 UTC m=+0.091202266 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:33:10 localhost podman[218327]: 2025-12-02 09:33:10.491115311 +0000 UTC m=+0.125835582 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:33:10 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:33:11 localhost systemd[1]: Reloading.
Dec 2 04:33:11 localhost systemd-sysv-generator[218377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:33:11 localhost systemd-rc-local-generator[218372]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: Reloading.
Dec 2 04:33:12 localhost systemd-rc-local-generator[218414]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:33:12 localhost systemd-sysv-generator[218417]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd-logind[757]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 2 04:33:12 localhost systemd-logind[757]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 2 04:33:12 localhost lvm[218464]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 2 04:33:12 localhost lvm[218464]: VG ceph_vg0 finished
Dec 2 04:33:12 localhost lvm[218463]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 2 04:33:12 localhost lvm[218463]: VG ceph_vg1 finished
Dec 2 04:33:12 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 2 04:33:12 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 2 04:33:12 localhost systemd[1]: Reloading.
Dec 2 04:33:12 localhost systemd-rc-local-generator[218516]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:33:12 localhost systemd-sysv-generator[218519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:13 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:13 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:13 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:13 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25151 DF PROTO=TCP SPT=39096 DPT=9882 SEQ=455505681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479841E40000000001030307)
Dec 2 04:33:13 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 2 04:33:14 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 2 04:33:14 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 2 04:33:14 localhost systemd[1]: man-db-cache-update.service: Consumed 1.502s CPU time.
Dec 2 04:33:14 localhost systemd[1]: run-r58bd866ee1894a2fa6157db2b5d4183e.service: Deactivated successfully.
Dec 2 04:33:15 localhost python3.9[219759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 2 04:33:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13961 DF PROTO=TCP SPT=47048 DPT=9100 SEQ=2305447161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47984D640000000001030307)
Dec 2 04:33:16 localhost python3.9[219873]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:17 localhost python3.9[220051]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:33:17 localhost systemd[1]: Reloading.
Dec 2 04:33:17 localhost systemd-rc-local-generator[220076]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:33:17 localhost systemd-sysv-generator[220079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:33:17 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:17 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:17 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:33:18 localhost python3.9[220213]: ansible-ansible.builtin.service_facts Invoked
Dec 2 04:33:18 localhost network[220230]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 04:33:18 localhost network[220231]: 'network-scripts' will be removed from distribution in near future.
Dec 2 04:33:18 localhost network[220232]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 04:33:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40807 DF PROTO=TCP SPT=33038 DPT=9105 SEQ=1315420248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479859E50000000001030307)
Dec 2 04:33:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63941 DF PROTO=TCP SPT=43906 DPT=9101 SEQ=1657305403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479865300000000001030307)
Dec 2 04:33:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:33:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63943 DF PROTO=TCP SPT=43906 DPT=9101 SEQ=1657305403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479871240000000001030307)
Dec 2 04:33:26 localhost python3.9[220467]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:27 localhost python3.9[220578]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:28 localhost python3.9[220689]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:28 localhost python3.9[220800]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63944 DF PROTO=TCP SPT=43906 DPT=9101 SEQ=1657305403 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479880E40000000001030307)
Dec 2 04:33:29 localhost python3.9[220911]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:30 localhost python3.9[221022]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:30 localhost python3.9[221133]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 04:33:31 localhost systemd[1]: tmp-crun.zzbcnZ.mount: Deactivated successfully.
Dec 2 04:33:31 localhost podman[221224]: 2025-12-02 09:33:31.463232972 +0000 UTC m=+0.095120532 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd)
Dec 2 04:33:31 localhost podman[221224]: 2025-12-02 09:33:31.504214489 +0000 UTC m=+0.136102079 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 2 04:33:31 localhost python3.9[221255]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:33:31 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 04:33:33 localhost python3.9[221374]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37327 DF PROTO=TCP SPT=60628 DPT=9102 SEQ=1845266549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479892ED0000000001030307)
Dec 2 04:33:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13173 DF PROTO=TCP SPT=34802 DPT=9105 SEQ=331962085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4798936E0000000001030307)
Dec 2 04:33:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:33:34 localhost systemd[1]: tmp-crun.nLoHAX.mount: Deactivated successfully.
Dec 2 04:33:34 localhost podman[221485]: 2025-12-02 09:33:34.398193844 +0000 UTC m=+0.092020127 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:33:34 localhost podman[221485]: 2025-12-02 09:33:34.475088014 +0000 UTC m=+0.168914367 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:33:34 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:33:34 localhost python3.9[221484]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:35 localhost python3.9[221619]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:36 localhost python3.9[221729]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:36 localhost python3.9[221839]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37329 DF PROTO=TCP SPT=60628 DPT=9102 SEQ=1845266549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47989EE40000000001030307)
Dec 2 04:33:37 localhost python3.9[221949]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:38 localhost python3.9[222059]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:39 localhost python3.9[222169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:39 localhost python3.9[222279]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37677 DF PROTO=TCP SPT=44624 DPT=9100 SEQ=2140714318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4798AAE40000000001030307)
Dec 2 04:33:40 localhost python3.9[222389]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:33:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:33:41 localhost systemd[1]: tmp-crun.oDkP9v.mount: Deactivated successfully.
Dec 2 04:33:41 localhost podman[222500]: 2025-12-02 09:33:41.097023516 +0000 UTC m=+0.115192440 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 2 04:33:41 localhost podman[222500]: 2025-12-02 09:33:41.104422364 +0000 UTC m=+0.122591288 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:33:41 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:33:41 localhost python3.9[222499]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:41 localhost python3.9[222627]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:42 localhost python3.9[222737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:43 localhost python3.9[222847]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19351 DF PROTO=TCP SPT=49686 DPT=9882 SEQ=2656160242 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4798B6E40000000001030307) Dec 2 04:33:43 localhost python3.9[222957]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:44 localhost python3.9[223067]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:33:45 localhost python3.9[223177]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37679 DF PROTO=TCP SPT=44624 DPT=9100 SEQ=2140714318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4798C2A40000000001030307) Dec 2 04:33:46 localhost python3.9[223287]: 
ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:33:47 localhost python3.9[223397]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:33:47 localhost systemd[1]: Reloading. Dec 2 04:33:47 localhost systemd-rc-local-generator[223423]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:33:47 localhost systemd-sysv-generator[223428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:33:48 localhost python3.9[223544]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:49 localhost python3.9[223655]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37331 DF PROTO=TCP SPT=60628 DPT=9102 SEQ=1845266549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4798CFE40000000001030307) Dec 2 04:33:50 localhost python3.9[223766]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:50 localhost python3.9[223877]: ansible-ansible.legacy.command Invoked with 
cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:51 localhost python3.9[223988]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:51 localhost python3.9[224099]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41785 DF PROTO=TCP SPT=42006 DPT=9101 SEQ=2616569565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4798DA600000000001030307) Dec 2 04:33:52 localhost python3.9[224210]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:54 localhost python3.9[224321]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:33:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41787 DF PROTO=TCP SPT=42006 DPT=9101 SEQ=2616569565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4798E6640000000001030307) Dec 2 04:33:57 localhost python3.9[224432]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:33:57 localhost python3.9[224542]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:33:59 localhost python3.9[224652]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:33:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41788 DF PROTO=TCP SPT=42006 DPT=9101 SEQ=2616569565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A4798F6240000000001030307) Dec 2 04:33:59 localhost python3.9[224762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:00 localhost python3.9[224872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:00 localhost python3.9[224982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:01 localhost python3.9[225092]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:34:02 localhost systemd[1]: tmp-crun.CJ0vaS.mount: Deactivated successfully. Dec 2 04:34:02 localhost podman[225203]: 2025-12-02 09:34:02.143303038 +0000 UTC m=+0.103895116 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:34:02 localhost podman[225203]: 2025-12-02 09:34:02.158902857 
+0000 UTC m=+0.119494955 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3) Dec 2 04:34:02 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:34:02 localhost python3.9[225202]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:02 localhost python3.9[225331]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:34:03.015 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:34:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:34:03.016 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:34:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:34:03.017 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:34:03 localhost python3.9[225441]: ansible-ansible.builtin.file Invoked with group=zuul 
owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54121 DF PROTO=TCP SPT=45392 DPT=9102 SEQ=1991185936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799081D0000000001030307) Dec 2 04:34:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=317 DF PROTO=TCP SPT=36452 DPT=9105 SEQ=2136280617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799089E0000000001030307) Dec 2 04:34:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:34:05 localhost podman[225459]: 2025-12-02 09:34:05.445197831 +0000 UTC m=+0.087262248 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true) Dec 2 04:34:05 localhost podman[225459]: 2025-12-02 09:34:05.483213684 +0000 UTC m=+0.125278081 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Dec 2 04:34:05 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:34:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54123 DF PROTO=TCP SPT=45392 DPT=9102 SEQ=1991185936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479914240000000001030307) Dec 2 04:34:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25154 DF PROTO=TCP SPT=39096 DPT=9882 SEQ=455505681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47991FE50000000001030307) Dec 2 04:34:10 localhost python3.9[225576]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Dec 2 04:34:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:34:11 localhost podman[225633]: 2025-12-02 09:34:11.544026368 +0000 UTC m=+0.188366008 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 04:34:11 localhost podman[225633]: 2025-12-02 09:34:11.581200709 +0000 UTC m=+0.225540359 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent) Dec 2 04:34:11 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:34:11 localhost python3.9[225704]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 2 04:34:12 localhost python3.9[225820]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541913.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Dec 2 04:34:13 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13964 DF PROTO=TCP SPT=47048 DPT=9100 SEQ=2305447161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47992BE40000000001030307) Dec 2 04:34:13 localhost sshd[225846]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:34:13 localhost systemd-logind[757]: New session 55 of user zuul. Dec 2 04:34:13 localhost systemd[1]: Started Session 55 of User zuul. Dec 2 04:34:13 localhost systemd[1]: session-55.scope: Deactivated successfully. Dec 2 04:34:13 localhost systemd-logind[757]: Session 55 logged out. Waiting for processes to exit. Dec 2 04:34:13 localhost systemd-logind[757]: Removed session 55. Dec 2 04:34:14 localhost python3.9[225957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:34:15 localhost python3.9[226043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668054.0983036-3389-49370459733825/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:15 localhost python3.9[226151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:34:16 localhost python3.9[226206]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t 
dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23696 DF PROTO=TCP SPT=58080 DPT=9100 SEQ=3870753883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479937E50000000001030307) Dec 2 04:34:16 localhost python3.9[226314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:34:17 localhost python3.9[226400]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668056.167528-3389-212269439790116/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:17 localhost python3.9[226508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:34:18 localhost python3.9[226594]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668057.2044785-3389-181542399892225/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=be0176be25a535cff695cce5406adb3d3b53bef4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:18 localhost python3.9[226738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:34:19 localhost python3.9[226844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668058.265475-3389-241886877258445/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=321 DF PROTO=TCP SPT=36452 DPT=9105 SEQ=2136280617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479943E40000000001030307) Dec 2 04:34:19 localhost python3.9[226972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 
04:34:20 localhost python3.9[227068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668059.3430681-3389-223376930207811/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:21 localhost python3.9[227178]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:34:21 localhost python3.9[227288]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:34:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39386 DF PROTO=TCP SPT=35254 DPT=9101 SEQ=4052270145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47994F900000000001030307) Dec 2 04:34:22 localhost python3.9[227398]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True 
get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:34:23 localhost python3.9[227510]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:34:23 localhost python3.9[227618]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:34:24 localhost python3.9[227728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:34:25 localhost python3.9[227814]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668064.003581-3765-162382892498878/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39388 DF PROTO=TCP SPT=35254 DPT=9101 SEQ=4052270145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47995BA40000000001030307) Dec 2 04:34:25 localhost python3.9[227922]: 
ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:34:26 localhost python3.9[228008]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668065.2029831-3809-229375328704837/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:34:27 localhost python3.9[228118]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Dec 2 04:34:27 localhost python3.9[228228]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:34:28 localhost python3[228338]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:34:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39389 DF PROTO=TCP SPT=35254 DPT=9101 SEQ=4052270145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47996B650000000001030307) Dec 2 04:34:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:34:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7342 DF PROTO=TCP SPT=58180 DPT=9102 SEQ=891547760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47997D4D0000000001030307) Dec 2 04:34:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43783 DF PROTO=TCP SPT=37720 DPT=9105 SEQ=501411175 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47997DCF0000000001030307) Dec 2 04:34:34 localhost podman[228376]: 2025-12-02 09:34:34.934081685 +0000 UTC m=+2.720192549 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:34:34 localhost podman[228376]: 2025-12-02 09:34:34.974194564 +0000 UTC m=+2.760305408 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 04:34:34 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:34:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:34:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7344 DF PROTO=TCP SPT=58180 DPT=9102 SEQ=891547760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479989640000000001030307) Dec 2 04:34:39 localhost podman[228409]: 2025-12-02 09:34:39.994664433 +0000 UTC m=+3.582532093 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:34:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53344 DF PROTO=TCP SPT=45828 DPT=9100 SEQ=1112148696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479995240000000001030307) Dec 2 04:34:40 localhost podman[228352]: 2025-12-02 09:34:28.95949298 +0000 UTC m=+0.052000100 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 2 04:34:40 localhost podman[228409]: 2025-12-02 09:34:40.056851826 +0000 UTC m=+3.644719456 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 04:34:40 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:34:40 localhost podman[228456]: Dec 2 04:34:40 localhost podman[228456]: 2025-12-02 09:34:40.244224596 +0000 UTC m=+0.086879568 container create ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 2 
04:34:40 localhost podman[228456]: 2025-12-02 09:34:40.204845957 +0000 UTC m=+0.047500959 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 2 04:34:40 localhost python3[228338]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Dec 2 04:34:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:34:42 localhost podman[228511]: 2025-12-02 09:34:42.450353258 +0000 UTC m=+0.087846735 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:34:42 localhost podman[228511]: 2025-12-02 09:34:42.457818428 +0000 UTC 
m=+0.095311965 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS) Dec 2 04:34:42 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:34:43 localhost python3.9[228620]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:34:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47240 DF PROTO=TCP SPT=40726 DPT=9882 SEQ=887817473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799A1640000000001030307) Dec 2 04:34:44 localhost python3.9[228732]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Dec 2 04:34:44 localhost python3.9[228842]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:34:45 localhost python3[228952]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:34:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53346 DF PROTO=TCP SPT=45828 DPT=9100 SEQ=1112148696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799ACE50000000001030307) Dec 2 04:34:46 localhost python3[228952]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 
"quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 
"sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 
"empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 2 04:34:46 localhost podman[229004]: 2025-12-02 09:34:46.32529686 +0000 UTC m=+0.098270966 container remove 1a8728c4e99f56ba35b80d741f9a9c3570dc11229db60a0ffdbea5a9022358e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '230f4ebc92ecc6f511b0217abb58f1b6-ff8ff724cb5f0d02131158e2fae849b6'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 2 04:34:46 localhost python3[228952]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Dec 2 04:34:46 localhost podman[229018]: Dec 2 04:34:46 localhost podman[229018]: 2025-12-02 09:34:46.43513596 +0000 UTC m=+0.092606282 container create a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:34:46 localhost podman[229018]: 2025-12-02 09:34:46.389157578 +0000 UTC m=+0.046627970 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 2 04:34:46 localhost python3[228952]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Dec 2 04:34:47 localhost python3.9[229165]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:34:47 localhost python3.9[229277]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:34:48 localhost python3.9[229386]: ansible-copy Invoked with 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764668088.0457976-4085-208589644972197/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:34:49 localhost python3.9[229441]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:34:49 localhost systemd[1]: Reloading. Dec 2 04:34:49 localhost systemd-rc-local-generator[229469]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:34:49 localhost systemd-sysv-generator[229472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43787 DF PROTO=TCP SPT=37720 DPT=9105 SEQ=501411175 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799B9E50000000001030307) Dec 2 04:34:50 localhost python3.9[229532]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:34:50 localhost systemd[1]: Reloading. Dec 2 04:34:50 localhost systemd-rc-local-generator[229555]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:34:50 localhost systemd-sysv-generator[229560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:50 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:34:50 localhost systemd[1]: Starting nova_compute container... Dec 2 04:34:50 localhost systemd[1]: Started libcrun container. 
Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:34:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 04:34:50 localhost podman[229573]: 2025-12-02 09:34:50.623970407 +0000 UTC m=+0.110036547 container init a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:34:50 localhost podman[229573]: 2025-12-02 09:34:50.63070845 +0000 UTC m=+0.116774590 container start a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', 
'/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_managed=true) Dec 2 04:34:50 localhost podman[229573]: nova_compute Dec 2 04:34:50 localhost nova_compute[229589]: + sudo -E kolla_set_configs Dec 2 04:34:50 localhost systemd[1]: Started nova_compute container. Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Validating config file Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying service configuration files Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying 
/var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Deleting /etc/ceph Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Creating directory /etc/ceph Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/ceph Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Deleting 
/var/lib/nova/.ssh/config Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Writing out command to execute Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:34:50 localhost nova_compute[229589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:34:50 localhost nova_compute[229589]: ++ cat /run_command Dec 2 04:34:50 localhost nova_compute[229589]: + CMD=nova-compute Dec 2 04:34:50 localhost nova_compute[229589]: + ARGS= Dec 2 04:34:50 localhost nova_compute[229589]: + sudo kolla_copy_cacerts Dec 2 04:34:50 localhost nova_compute[229589]: + [[ ! -n '' ]] Dec 2 04:34:50 localhost nova_compute[229589]: + . 
kolla_extend_start Dec 2 04:34:50 localhost nova_compute[229589]: Running command: 'nova-compute' Dec 2 04:34:50 localhost nova_compute[229589]: + echo 'Running command: '\''nova-compute'\''' Dec 2 04:34:50 localhost nova_compute[229589]: + umask 0022 Dec 2 04:34:50 localhost nova_compute[229589]: + exec nova-compute Dec 2 04:34:51 localhost python3.9[229709]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:34:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8099 DF PROTO=TCP SPT=46342 DPT=9101 SEQ=3209027010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799C4C00000000001030307) Dec 2 04:34:52 localhost nova_compute[229589]: 2025-12-02 09:34:52.484 229593 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:34:52 localhost nova_compute[229589]: 2025-12-02 09:34:52.485 229593 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:34:52 localhost nova_compute[229589]: 2025-12-02 09:34:52.485 229593 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:34:52 localhost nova_compute[229589]: 2025-12-02 09:34:52.485 229593 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 2 04:34:52 localhost nova_compute[229589]: 2025-12-02 09:34:52.619 229593 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:34:52 
localhost nova_compute[229589]: 2025-12-02 09:34:52.641 229593 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:34:52 localhost nova_compute[229589]: 2025-12-02 09:34:52.641 229593 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.074 229593 INFO nova.virt.driver [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 2 04:34:53 localhost python3.9[229821]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.195 229593 INFO nova.compute.provider_config [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.207 229593 WARNING nova.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.207 229593 DEBUG oslo_concurrency.lockutils [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.208 229593 DEBUG oslo_concurrency.lockutils [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.208 229593 DEBUG oslo_concurrency.lockutils [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.208 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.208 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.208 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.209 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.209 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.209 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.209 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.209 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.209 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] backdoor_port = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.209 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.210 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.210 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.210 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.210 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.210 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.210 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.210 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.211 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.211 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.211 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] console_host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.211 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.211 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.211 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] daemon = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.211 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.212 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.212 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.212 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.212 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost 
nova_compute[229589]: 2025-12-02 09:34:53.212 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.212 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.212 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.213 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.213 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.213 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.213 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.213 229593 DEBUG oslo_service.service 
[None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.213 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.213 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.214 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.214 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.214 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.214 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.214 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - 
- - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.214 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.215 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.215 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.215 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.215 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.215 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.215 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.216 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.216 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.216 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.216 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.216 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.216 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.216 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - 
- -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.217 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.217 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.217 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.217 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.217 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.217 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.217 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.217 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.218 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.218 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.218 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.218 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.218 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] max_concurrent_live_migrations = 
1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.218 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.218 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.218 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.219 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.219 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.219 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.219 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 
04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.219 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.219 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.219 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.220 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.220 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.220 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.220 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 
2025-12-02 09:34:53.220 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.220 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.220 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.221 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.221 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.221 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.221 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.221 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.221 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.222 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.222 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.222 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.222 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.222 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.222 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] rate_limit_interval = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.222 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.222 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.223 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.223 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.223 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.223 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.223 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 
2025-12-02 09:34:53.223 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.224 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.224 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.224 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.224 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.224 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.224 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.224 229593 DEBUG 
oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.225 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.225 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.225 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.225 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.225 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.225 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.225 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.226 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.226 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.226 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.226 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.226 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.226 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.226 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] sync_power_state_interval = 600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.226 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.227 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.227 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.227 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.227 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.227 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.227 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 
2025-12-02 09:34:53.228 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.228 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.228 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.228 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.228 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.228 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.228 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.229 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plugging_is_fatal 
= True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.229 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.229 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.229 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.229 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.229 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.229 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.230 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.230 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.230 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.230 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.230 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.230 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.230 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.231 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - 
- -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.231 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.231 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.231 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.231 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.231 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.231 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.232 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.232 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.232 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.232 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.232 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.232 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.233 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.233 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.233 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.233 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.233 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.233 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.233 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.234 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.234 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.234 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.234 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.234 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.234 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.234 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.235 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.235 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] 
cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.235 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.235 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.235 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.235 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.235 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.236 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.236 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.236 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.236 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.236 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.236 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.237 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.237 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.237 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.memcache_username = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.237 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.237 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.237 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.237 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.238 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.238 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.238 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 
localhost nova_compute[229589]: 2025-12-02 09:34:53.238 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.238 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.238 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.238 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.239 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.239 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.239 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.239 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.239 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.239 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.239 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.240 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.240 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.240 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.240 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.insecure = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.240 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.240 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.240 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.241 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.241 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.241 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.241 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.cpu_shared_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.241 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.241 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.241 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.242 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.242 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.242 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.242 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] 
compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.242 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.242 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.242 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.243 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.243 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.243 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.243 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] 
consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.243 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.243 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.243 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.244 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.244 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.244 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.244 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.244 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.244 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.245 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.245 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.245 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.245 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.245 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost 
nova_compute[229589]: 2025-12-02 09:34:53.245 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.245 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.246 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.246 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.246 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.246 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.246 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.246 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.246 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.247 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.247 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.247 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.247 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.247 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.247 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.247 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.248 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.248 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.248 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.248 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.248 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.248 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.248 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.248 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.249 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.249 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.249 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.249 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.249 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.249 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.250 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.250 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.250 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.250 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.250 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.250 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.250 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.250 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.251 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.251 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.251 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.251 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.251 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.251 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.251 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.252 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.252 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.252 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.252 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.252 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.252 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.252 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.253 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.253 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.253 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.253 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.253 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.253 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.253 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.254 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.254 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.254 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.254 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.254 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.254 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.254 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.255 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.255 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.255 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.255 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.255 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.255 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.255 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.255 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.256 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.256 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.256 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.256 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.256 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.256 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.257 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.257 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.257 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.257 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.257 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.257 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.257 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.258 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.258 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.258 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.258 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.258 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.258 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.258 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.258 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.259 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.259 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.259 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.259 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.259 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.259 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.260 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.260 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.260 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.260 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.260 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.260 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.261 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.261 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.261 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.261 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.261 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.261 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.261 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.262 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.262 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.262 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.262 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.262 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.262 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.263 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.263 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.263 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.263 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.263 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.263 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.263 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.264 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.264 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.264 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.264 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.264 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.264 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.264 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.265 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.265 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.265 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.265 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.265 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.265 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.266 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.266 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.266 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -]
barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.266 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.266 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.266 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.267 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.267 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.267 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.267 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.verify_ssl = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.267 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.267 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.268 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.268 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.268 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.268 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.268 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.268 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.268 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.269 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.269 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.269 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.269 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.269 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.269 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.270 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.270 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.270 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.270 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.270 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.270 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.271 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.271 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.271 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.271 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.271 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.271 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.271 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.272 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.collect_timing = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.272 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.272 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.272 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.272 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.272 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.273 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.273 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.min_version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.273 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.273 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.273 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.273 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.273 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.274 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.274 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.274 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.274 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.274 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.274 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.274 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.275 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.275 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.275 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.275 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.275 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.275 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.276 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.276 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.276 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.disk_prefix = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.276 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.276 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.276 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.277 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.277 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.277 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.277 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.277 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.277 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.278 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.278 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.278 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.278 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.278 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.inject_partition = -2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.278 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.278 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.279 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.279 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.279 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.279 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.279 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.279 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.280 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.280 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.280 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.280 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.280 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.280 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] 
libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.281 229593 WARNING oslo_config.cfg [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Dec 2 04:34:53 localhost nova_compute[229589]: live_migration_uri is deprecated for removal in favor of two other options that Dec 2 04:34:53 localhost nova_compute[229589]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Dec 2 04:34:53 localhost nova_compute[229589]: and ``live_migration_inbound_addr`` respectively. Dec 2 04:34:53 localhost nova_compute[229589]: ). Its value may be silently ignored in the future.#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.281 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.281 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.281 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.281 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.282 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.282 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.282 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.282 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.282 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.282 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.282 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.283 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.283 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.283 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.283 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.283 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.283 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.284 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.284 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rbd_secret_uuid = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.284 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.284 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.284 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.284 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.285 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.285 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.285 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.285 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.285 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.285 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.286 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.286 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.286 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.286 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.286 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.286 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.287 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.287 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.287 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.287 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 
localhost nova_compute[229589]: 2025-12-02 09:34:53.287 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.287 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.288 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.288 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.288 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.288 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.288 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost 
nova_compute[229589]: 2025-12-02 09:34:53.288 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.288 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.289 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.289 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.289 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.289 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.289 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost 
nova_compute[229589]: 2025-12-02 09:34:53.289 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.290 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.290 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.290 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.290 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.290 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.290 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.291 229593 DEBUG 
oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.291 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.291 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.291 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.291 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.291 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.291 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.292 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - 
- - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.292 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.292 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.292 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.292 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.292 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.293 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.293 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.293 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.293 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.293 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.293 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.293 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.294 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.294 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] notifications.default_level = INFO log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.294 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.294 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.294 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.294 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.295 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.295 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.295 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.295 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.295 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.296 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.296 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.296 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.296 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.296 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.connect_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.296 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.296 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.297 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.297 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.297 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.297 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.297 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 
2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.297 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.297 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.298 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.298 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.298 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.298 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.298 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 
09:34:53.298 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.299 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.299 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.299 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.299 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.299 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.299 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.299 229593 DEBUG 
oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.300 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.300 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.300 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.300 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.300 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.300 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.301 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - 
- - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.301 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.301 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.301 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.301 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.301 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.301 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.302 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.302 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.302 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.302 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.302 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.302 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.303 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.303 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.303 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.303 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.303 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.303 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.304 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.304 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.304 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.304 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.304 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.304 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.305 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.305 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.305 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.305 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.305 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.305 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.305 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.306 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.306 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.306 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.306 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.306 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.306 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.306 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.307 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.307 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.307 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.307 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.307 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.307 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.307 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.308 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.308 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.308 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.308 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.308 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.308 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.309 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.309 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.309 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.309 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.309 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.309 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.310 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.310 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.310 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.310 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.310 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.310 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.310 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.311 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.311 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.311 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.311 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.311 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.311 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.311 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.312 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.312 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.312 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.312 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.312 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.312 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.313 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.313 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.313 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.313 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.313 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.313 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.313 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.314 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.314 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.314 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.314 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.314 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.314 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.314 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.315 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.315 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.315 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.315 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.315 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.315 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.315 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.315 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.316 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.316 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.316 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.316 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.316 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.316 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.316 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.317 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.317 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.317 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.317 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.317 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.317 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.317 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.317 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.318 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.318 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.318 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.318 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.318 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.318 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.318 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.319 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.319 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.319 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.319 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.319 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.319 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.320 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.320 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.320 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.320 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.320 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.320 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.321 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.321 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.321 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.321 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.321 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.321 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02
09:34:53.321 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.321 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.322 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.322 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.322 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.322 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.322 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.322 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.322 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.323 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.323 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.323 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.323 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.323 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.323 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.323 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.324 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.324 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.324 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.324 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.324 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.324 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.324 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.325 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.325 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.325 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.325 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.325 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - 
- - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.325 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.326 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.326 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.326 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.326 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.326 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.326 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 
- - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.326 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.326 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.327 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.327 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.327 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.327 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.327 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.327 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.327 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.328 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.328 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.328 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.328 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.328 229593 
DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.328 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.328 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.329 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.329 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.329 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.329 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.329 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.329 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.329 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.330 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.330 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.330 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.330 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - 
- - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.330 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.330 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.330 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.330 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.331 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.331 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.331 229593 DEBUG 
oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.331 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.331 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.331 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.331 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.332 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.332 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.332 229593 
DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.332 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.332 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.332 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.333 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.333 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.333 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.333 
229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.333 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.333 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.333 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.334 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.334 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.334 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.334 229593 DEBUG oslo_service.service [None 
req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.334 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.334 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.334 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.334 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.335 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.335 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.335 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] 
oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.335 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.335 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.335 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.335 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.336 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.336 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.336 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.336 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.336 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.336 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.336 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.337 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.337 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.337 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.337 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.337 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.337 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.337 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.337 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.338 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.338 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.338 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.338 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.338 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.338 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.338 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.339 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.339 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.339 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.339 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.339 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.339 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.339 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.339 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.340 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.340 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.340 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.340 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.340 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.340 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.341 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.341 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.341 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.341 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.341 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.341 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.341 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.341 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.342 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.342 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.342 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.342 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.342 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.342 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.342 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.343 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.343 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.343 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.343 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.343 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.343 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.343 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.344 229593 DEBUG oslo_service.service [None req-922028c2-4292-4952-b0e3-150ee468d167 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.345 229593 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.371 229593 INFO nova.virt.node [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Determined node identity c79215b2-6762-4f7f-a322-f44db2b0b9bd from /var/lib/nova/compute_id
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.371 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.372 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.372 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.373 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.384 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Registering for lifecycle events
_get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.387 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.388 229593 INFO nova.virt.libvirt.driver [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Connection event '1' reason 'None'
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.403 229593 DEBUG nova.virt.libvirt.volume.mount [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.405 229593 INFO nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Libvirt host capabilities
Dec 2 04:34:53 localhost nova_compute[229589]: [capabilities XML logged here with its markup stripped during capture; surviving values: host UUID f041467c-26d0-44b9-832e-8db5f9b7a49d, arch x86_64, CPU model EPYC-Rome-v4 (vendor AMD), migration transports tcp and rdma, memory/page counts 16116612, 4029153, 0, 0, security models selinux (doi 0, labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, +107:+107), and two hvm guest entries (wordsize 32 and 64) with emulator /usr/libexec/qemu-kvm and machine types pc-i440fx-rhel7.6.0 (pc), pc-q35-rhel9.8.0 (q35), pc-q35-rhel9.6.0, pc-q35-rhel8.6.0, pc-q35-rhel9.4.0, pc-q35-rhel8.5.0, pc-q35-rhel8.3.0, pc-q35-rhel7.6.0, pc-q35-rhel8.4.0, pc-q35-rhel9.2.0, pc-q35-rhel8.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.0.0, pc-q35-rhel8.1.0]
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.416 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.433 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 2 04:34:53 localhost nova_compute[229589]: [domain capabilities XML logged here with its markup stripped during capture; surviving values: emulator /usr/libexec/qemu-kvm, domain kvm, machine pc-q35-rhel9.8.0, arch i686, firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom and pflash), yes/no and on/off feature toggles, host CPU model EPYC-Rome (vendor AMD), and a supported-CPU-model list beginning 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, continuing] Dec 2 04:34:53 localhost
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Genoa Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Genoa-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-IBPB Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 
Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v4 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v1 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v2 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: GraniteRapids Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: GraniteRapids-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: GraniteRapids-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-noTSX Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Haswell-noTSX-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-noTSX Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
Dec 2 04:34:53 localhost nova_compute[229589]: [libvirt domain-capabilities XML dump; the XML markup was lost in log capture, leaving only text values interleaved with repeated syslog prefixes. Recoverable values follow, in capture order; the grouping labels are inferred from the libvirt domainCapabilities schema and may be imprecise.]
Dec 2 04:34:53 localhost nova_compute[229589]: CPU models (custom mode): Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Dec 2 04:34:53 localhost nova_compute[229589]: memoryBacking sourceType: file, anonymous, memfd
Dec 2 04:34:53 localhost nova_compute[229589]: disk diskDevice: disk, cdrom, floppy, lun; bus: fdc, scsi, virtio, usb, sata; model: virtio, virtio-transitional, virtio-non-transitional
Dec 2 04:34:53 localhost nova_compute[229589]: graphics type: vnc, egl-headless, dbus
Dec 2 04:34:53 localhost nova_compute[229589]: hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; subsysType: usb, pci, scsi
Dec 2 04:34:53 localhost nova_compute[229589]: rng model: virtio, virtio-transitional, virtio-non-transitional; backendModel: random, egd, builtin
Dec 2 04:34:53 localhost nova_compute[229589]: filesystem driverType: path, handle, virtiofs
Dec 2 04:34:53 localhost nova_compute[229589]: tpm model: tpm-tis, tpm-crb; backendModel: emulator, external; backendVersion: 2.0
Dec 2 04:34:53 localhost nova_compute[229589]: redirdev bus: usb
Dec 2 04:34:53 localhost nova_compute[229589]: channel type: pty, unix
Dec 2 04:34:53 localhost nova_compute[229589]: (unlabeled enum, grouping unclear): qemu, builtin
Dec 2 04:34:53 localhost nova_compute[229589]: interface backendType: default, passt
Dec 2 04:34:53 localhost nova_compute[229589]: panic model: isa, hyperv
Dec 2 04:34:53 localhost nova_compute[229589]: char device type: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Dec 2 04:34:53 localhost nova_compute[229589]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset [capture truncated]
04:34:53 localhost nova_compute[229589]: vendor_id Dec 2 04:34:53 localhost nova_compute[229589]: frequencies Dec 2 04:34:53 localhost nova_compute[229589]: reenlightenment Dec 2 04:34:53 localhost nova_compute[229589]: tlbflush Dec 2 04:34:53 localhost nova_compute[229589]: ipi Dec 2 04:34:53 localhost nova_compute[229589]: avic Dec 2 04:34:53 localhost nova_compute[229589]: emsr_bitmap Dec 2 04:34:53 localhost nova_compute[229589]: xmm_input Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 4095 Dec 2 04:34:53 localhost nova_compute[229589]: on Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: Linux KVM Hv Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: tdx Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.443 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: /usr/libexec/qemu-kvm Dec 2 04:34:53 localhost nova_compute[229589]: kvm Dec 2 04:34:53 localhost nova_compute[229589]: pc-i440fx-rhel7.6.0 Dec 2 04:34:53 localhost nova_compute[229589]: i686 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 
2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: rom Dec 2 04:34:53 localhost nova_compute[229589]: pflash Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: yes Dec 2 04:34:53 localhost nova_compute[229589]: no Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: no Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: on Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: on Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome Dec 2 04:34:53 localhost nova_compute[229589]: AMD Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 486 Dec 2 04:34:53 localhost nova_compute[229589]: 486-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-noTSX Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Broadwell-noTSX-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-noTSX Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v5 Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Conroe Dec 2 04:34:53 localhost nova_compute[229589]: Conroe-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v2 Dec 2 04:34:53 localhost nova_compute[229589]: 
Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Genoa Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Genoa-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-IBPB Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 
Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v4 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v1 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v2 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: GraniteRapids Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: GraniteRapids-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: GraniteRapids-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
Dec 2 04:34:53 localhost nova_compute[229589]: Haswell
Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-noTSX
Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-noTSX-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v2
Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v3
Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v4
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-noTSX
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v2
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v3
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v4
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v5
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v6
Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v7
Dec 2 04:34:53 localhost nova_compute[229589]: IvyBridge
Dec 2 04:34:53 localhost nova_compute[229589]: IvyBridge-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: IvyBridge-v1
Dec 2 04:34:53 localhost nova_compute[229589]: IvyBridge-v2
Dec 2 04:34:53 localhost nova_compute[229589]: KnightsMill
Dec 2 04:34:53 localhost nova_compute[229589]: KnightsMill-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Nehalem
Dec 2 04:34:53 localhost nova_compute[229589]: Nehalem-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: Nehalem-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Nehalem-v2
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G1
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G1-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G2
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G2-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G3
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G3-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G4
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G4-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G5
Dec 2 04:34:53 localhost nova_compute[229589]: Opteron_G5-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Penryn
Dec 2 04:34:53 localhost nova_compute[229589]: Penryn-v1
Dec 2 04:34:53 localhost nova_compute[229589]: SandyBridge
Dec 2 04:34:53 localhost nova_compute[229589]: SandyBridge-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: SandyBridge-v1
Dec 2 04:34:53 localhost nova_compute[229589]: SandyBridge-v2
Dec 2 04:34:53 localhost nova_compute[229589]: SapphireRapids
Dec 2 04:34:53 localhost nova_compute[229589]: SapphireRapids-v1
Dec 2 04:34:53 localhost nova_compute[229589]: SapphireRapids-v2
Dec 2 04:34:53 localhost nova_compute[229589]: SapphireRapids-v3
Dec 2 04:34:53 localhost nova_compute[229589]: SierraForest
Dec 2 04:34:53 localhost nova_compute[229589]: SierraForest-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-noTSX-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-v2
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-v3
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-v4
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-noTSX-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v2
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v3
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v4
Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v5
Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge
Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge-v2
Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge-v3
Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge-v4
Dec 2 04:34:53 localhost nova_compute[229589]: Westmere
Dec 2 04:34:53 localhost nova_compute[229589]: Westmere-IBRS
Dec 2 04:34:53 localhost nova_compute[229589]: Westmere-v1
Dec 2 04:34:53 localhost nova_compute[229589]: Westmere-v2
Dec 2 04:34:53 localhost nova_compute[229589]: athlon
Dec 2 04:34:53 localhost nova_compute[229589]: athlon-v1
Dec 2 04:34:53 localhost nova_compute[229589]: core2duo
Dec 2 04:34:53 localhost nova_compute[229589]: core2duo-v1
Dec 2 04:34:53 localhost nova_compute[229589]: coreduo
Dec 2 04:34:53 localhost nova_compute[229589]: coreduo-v1
Dec 2 04:34:53 localhost nova_compute[229589]: kvm32
Dec 2 04:34:53 localhost nova_compute[229589]: kvm32-v1
Dec 2 04:34:53 localhost nova_compute[229589]: kvm64
Dec 2 04:34:53 localhost nova_compute[229589]: kvm64-v1
Dec 2 04:34:53 localhost nova_compute[229589]: n270
Dec 2 04:34:53 localhost nova_compute[229589]: n270-v1
Dec 2 04:34:53 localhost nova_compute[229589]: pentium
localhost nova_compute[229589]: pentium-v1 Dec 2 04:34:53 localhost nova_compute[229589]: pentium2 Dec 2 04:34:53 localhost nova_compute[229589]: pentium2-v1 Dec 2 04:34:53 localhost nova_compute[229589]: pentium3 Dec 2 04:34:53 localhost nova_compute[229589]: pentium3-v1 Dec 2 04:34:53 localhost nova_compute[229589]: phenom Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: phenom-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: qemu32 Dec 2 04:34:53 localhost nova_compute[229589]: qemu32-v1 Dec 2 04:34:53 localhost nova_compute[229589]: qemu64 Dec 2 04:34:53 localhost nova_compute[229589]: qemu64-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: file Dec 2 04:34:53 localhost nova_compute[229589]: anonymous Dec 2 04:34:53 localhost nova_compute[229589]: memfd Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: disk Dec 2 04:34:53 localhost nova_compute[229589]: cdrom Dec 2 04:34:53 localhost nova_compute[229589]: floppy Dec 2 04:34:53 localhost nova_compute[229589]: lun Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: ide Dec 2 04:34:53 localhost nova_compute[229589]: fdc Dec 2 04:34:53 localhost 
nova_compute[229589]: scsi Dec 2 04:34:53 localhost nova_compute[229589]: virtio Dec 2 04:34:53 localhost nova_compute[229589]: usb Dec 2 04:34:53 localhost nova_compute[229589]: sata Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: virtio Dec 2 04:34:53 localhost nova_compute[229589]: virtio-transitional Dec 2 04:34:53 localhost nova_compute[229589]: virtio-non-transitional Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: vnc Dec 2 04:34:53 localhost nova_compute[229589]: egl-headless Dec 2 04:34:53 localhost nova_compute[229589]: dbus Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: subsystem Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: default Dec 2 04:34:53 localhost nova_compute[229589]: mandatory Dec 2 04:34:53 localhost nova_compute[229589]: requisite Dec 2 04:34:53 localhost nova_compute[229589]: optional Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: usb Dec 2 04:34:53 localhost nova_compute[229589]: pci Dec 2 04:34:53 localhost nova_compute[229589]: scsi Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: virtio Dec 2 04:34:53 localhost nova_compute[229589]: virtio-transitional Dec 2 04:34:53 localhost nova_compute[229589]: virtio-non-transitional Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: random Dec 2 04:34:53 localhost nova_compute[229589]: egd Dec 2 04:34:53 localhost nova_compute[229589]: builtin Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: path Dec 2 04:34:53 localhost nova_compute[229589]: handle Dec 2 04:34:53 localhost nova_compute[229589]: virtiofs Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: tpm-tis Dec 2 04:34:53 localhost nova_compute[229589]: tpm-crb Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: emulator Dec 2 04:34:53 localhost nova_compute[229589]: external Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 2.0 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: usb Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: pty Dec 2 04:34:53 localhost nova_compute[229589]: unix Dec 2 04:34:53 localhost nova_compute[229589]: 
Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: qemu Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: builtin Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: default Dec 2 04:34:53 localhost nova_compute[229589]: passt Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: isa Dec 2 04:34:53 localhost nova_compute[229589]: hyperv Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: null Dec 2 04:34:53 localhost nova_compute[229589]: vc Dec 2 04:34:53 localhost nova_compute[229589]: pty Dec 2 04:34:53 localhost nova_compute[229589]: dev Dec 2 04:34:53 localhost nova_compute[229589]: file Dec 2 04:34:53 localhost nova_compute[229589]: pipe Dec 2 04:34:53 localhost nova_compute[229589]: stdio Dec 2 04:34:53 localhost nova_compute[229589]: udp Dec 2 04:34:53 localhost nova_compute[229589]: tcp Dec 2 04:34:53 localhost nova_compute[229589]: unix Dec 2 04:34:53 localhost nova_compute[229589]: qemu-vdagent Dec 2 04:34:53 localhost nova_compute[229589]: dbus Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: relaxed Dec 2 04:34:53 localhost nova_compute[229589]: vapic Dec 2 04:34:53 localhost nova_compute[229589]: spinlocks Dec 2 04:34:53 localhost nova_compute[229589]: vpindex Dec 2 04:34:53 localhost nova_compute[229589]: runtime Dec 2 04:34:53 localhost nova_compute[229589]: synic Dec 2 04:34:53 localhost nova_compute[229589]: stimer Dec 2 04:34:53 localhost nova_compute[229589]: reset Dec 2 04:34:53 localhost nova_compute[229589]: vendor_id Dec 2 04:34:53 localhost nova_compute[229589]: frequencies Dec 2 04:34:53 localhost nova_compute[229589]: reenlightenment Dec 2 04:34:53 localhost nova_compute[229589]: tlbflush Dec 2 04:34:53 localhost nova_compute[229589]: ipi Dec 2 04:34:53 localhost nova_compute[229589]: avic Dec 2 04:34:53 localhost nova_compute[229589]: emsr_bitmap Dec 2 04:34:53 localhost nova_compute[229589]: xmm_input Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 4095 Dec 2 04:34:53 localhost nova_compute[229589]: on Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: Linux KVM Hv Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: tdx Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.470 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.475 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: /usr/libexec/qemu-kvm Dec 2 04:34:53 localhost nova_compute[229589]: kvm Dec 2 04:34:53 localhost nova_compute[229589]: pc-q35-rhel9.8.0 Dec 2 04:34:53 localhost nova_compute[229589]: x86_64 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: efi Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Dec 2 04:34:53 localhost nova_compute[229589]: /usr/share/edk2/ovmf/OVMF_CODE.fd Dec 2 04:34:53 localhost nova_compute[229589]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Dec 2 04:34:53 localhost nova_compute[229589]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: rom Dec 2 04:34:53 localhost 
nova_compute[229589]: pflash Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: yes Dec 2 04:34:53 localhost nova_compute[229589]: no Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: yes Dec 2 04:34:53 localhost nova_compute[229589]: no Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: on Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: on Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome Dec 2 04:34:53 localhost nova_compute[229589]: AMD Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 
Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 486 Dec 2 04:34:53 localhost nova_compute[229589]: 486-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-noTSX Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-noTSX-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 
Broadwell-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Broadwell-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 
Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-noTSX Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 
2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cascadelake-Server-v5 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Conroe Dec 2 04:34:53 localhost nova_compute[229589]: Conroe-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Genoa Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Genoa-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
Dec 2 04:34:53 localhost nova_compute[229589]: [libvirt CPU model capabilities dump; XML markup lost in log extraction. Recovered supported CPU model names: EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS (list continues)]
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-noTSX-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Client-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-noTSX-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Skylake-Server-v5 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Snowridge-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Westmere Dec 2 04:34:53 localhost nova_compute[229589]: Westmere-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Westmere-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Westmere-v2 Dec 2 04:34:53 localhost nova_compute[229589]: athlon Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: athlon-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: core2duo Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: core2duo-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: coreduo Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: coreduo-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: kvm32 Dec 2 04:34:53 localhost nova_compute[229589]: kvm32-v1 Dec 2 04:34:53 localhost nova_compute[229589]: kvm64 Dec 2 04:34:53 localhost nova_compute[229589]: kvm64-v1 Dec 2 04:34:53 localhost nova_compute[229589]: n270 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: n270-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: pentium Dec 2 04:34:53 localhost nova_compute[229589]: pentium-v1 Dec 2 04:34:53 localhost nova_compute[229589]: pentium2 Dec 2 04:34:53 localhost nova_compute[229589]: pentium2-v1 Dec 2 04:34:53 localhost nova_compute[229589]: pentium3 Dec 2 04:34:53 localhost nova_compute[229589]: pentium3-v1 Dec 2 04:34:53 localhost 
nova_compute[229589]: phenom Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: phenom-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: qemu32 Dec 2 04:34:53 localhost nova_compute[229589]: qemu32-v1 Dec 2 04:34:53 localhost nova_compute[229589]: qemu64 Dec 2 04:34:53 localhost nova_compute[229589]: qemu64-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: file Dec 2 04:34:53 localhost nova_compute[229589]: anonymous Dec 2 04:34:53 localhost nova_compute[229589]: memfd Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: disk Dec 2 04:34:53 localhost nova_compute[229589]: cdrom Dec 2 04:34:53 localhost nova_compute[229589]: floppy Dec 2 04:34:53 localhost nova_compute[229589]: lun Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: fdc Dec 2 04:34:53 localhost nova_compute[229589]: scsi Dec 2 04:34:53 localhost nova_compute[229589]: virtio Dec 2 04:34:53 localhost nova_compute[229589]: usb Dec 2 04:34:53 localhost nova_compute[229589]: sata Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: virtio Dec 2 04:34:53 
localhost nova_compute[229589]: virtio-transitional Dec 2 04:34:53 localhost nova_compute[229589]: virtio-non-transitional Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: vnc Dec 2 04:34:53 localhost nova_compute[229589]: egl-headless Dec 2 04:34:53 localhost nova_compute[229589]: dbus Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: subsystem Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: default Dec 2 04:34:53 localhost nova_compute[229589]: mandatory Dec 2 04:34:53 localhost nova_compute[229589]: requisite Dec 2 04:34:53 localhost nova_compute[229589]: optional Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: usb Dec 2 04:34:53 localhost nova_compute[229589]: pci Dec 2 04:34:53 localhost nova_compute[229589]: scsi Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: virtio Dec 2 04:34:53 localhost nova_compute[229589]: virtio-transitional Dec 2 04:34:53 localhost nova_compute[229589]: virtio-non-transitional Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: random Dec 2 04:34:53 localhost 
nova_compute[229589]: egd Dec 2 04:34:53 localhost nova_compute[229589]: builtin Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: path Dec 2 04:34:53 localhost nova_compute[229589]: handle Dec 2 04:34:53 localhost nova_compute[229589]: virtiofs Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: tpm-tis Dec 2 04:34:53 localhost nova_compute[229589]: tpm-crb Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: emulator Dec 2 04:34:53 localhost nova_compute[229589]: external Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 2.0 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: usb Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: pty Dec 2 04:34:53 localhost nova_compute[229589]: unix Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: qemu Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: builtin Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: default Dec 2 04:34:53 localhost nova_compute[229589]: passt Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: isa Dec 2 04:34:53 localhost nova_compute[229589]: hyperv Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: null Dec 2 04:34:53 localhost nova_compute[229589]: vc Dec 2 04:34:53 localhost nova_compute[229589]: pty Dec 2 04:34:53 localhost nova_compute[229589]: dev Dec 2 04:34:53 localhost nova_compute[229589]: file Dec 2 04:34:53 localhost nova_compute[229589]: pipe Dec 2 04:34:53 localhost nova_compute[229589]: stdio Dec 2 04:34:53 localhost nova_compute[229589]: udp Dec 2 04:34:53 localhost nova_compute[229589]: tcp Dec 2 04:34:53 localhost nova_compute[229589]: unix Dec 2 04:34:53 localhost nova_compute[229589]: qemu-vdagent Dec 2 04:34:53 localhost nova_compute[229589]: dbus Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: relaxed Dec 2 04:34:53 localhost nova_compute[229589]: vapic Dec 2 04:34:53 localhost nova_compute[229589]: spinlocks Dec 2 04:34:53 localhost nova_compute[229589]: vpindex Dec 2 04:34:53 localhost nova_compute[229589]: runtime Dec 2 04:34:53 localhost nova_compute[229589]: synic Dec 2 04:34:53 localhost nova_compute[229589]: stimer Dec 2 04:34:53 localhost nova_compute[229589]: reset Dec 2 04:34:53 localhost nova_compute[229589]: vendor_id Dec 2 04:34:53 localhost nova_compute[229589]: frequencies Dec 2 04:34:53 localhost nova_compute[229589]: reenlightenment Dec 2 04:34:53 localhost nova_compute[229589]: tlbflush Dec 2 04:34:53 localhost nova_compute[229589]: ipi Dec 2 04:34:53 localhost nova_compute[229589]: avic Dec 2 04:34:53 localhost nova_compute[229589]: emsr_bitmap Dec 2 04:34:53 localhost nova_compute[229589]: xmm_input Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 4095 Dec 2 04:34:53 localhost nova_compute[229589]: on Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: Linux KVM Hv Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: tdx Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:34:53 
Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.532 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: [domain capabilities XML; element tags were lost in log capture, only text values remain] emulator: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64; loader: /usr/share/OVMF/OVMF_CODE.secboot.fd (types: rom pflash; readonly: yes no; secure: no); cpu modes: host-passthrough (migratable: on off), maximum (migratable: on off), host-model EPYC-Rome vendor AMD; CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Cooperlake-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Denverton Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Denverton-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dhyana-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Genoa Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Genoa-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-IBPB Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Milan-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-Rome-v4 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v1 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v2 Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: EPYC-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: GraniteRapids Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 
Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: GraniteRapids-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: GraniteRapids-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-noTSX Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-noTSX-IBRS Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Haswell-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 
Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-noTSX Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v1 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost 
nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v2 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v3 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 
localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v4 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Icelake-Server-v5 Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 
Dec 2 04:34:53 localhost nova_compute[229589]: [libvirt domain capabilities dump continued; XML markup lost in log capture. Recoverable CPU model names: Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1]
Dec 2 04:34:53 localhost nova_compute[229589]: [Recoverable device/feature values from the same dump: memory backing: file, anonymous, memfd; disk device: disk, cdrom, floppy, lun; disk bus: ide, fdc, scsi, virtio, usb, sata; virtio model variants: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; hostdev: subsystem; policy: default, mandatory, requisite, optional; hostdev subsystem types: usb, pci, scsi; rng backends: random, egd, builtin; filesystem drivers: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; version 2.0; redirdev: usb; channel: pty, unix; firmware: qemu, builtin; interface backend: default, passt; panic models: isa, hyperv; serial/console backends: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input]
04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: 4095 Dec 2 04:34:53 localhost nova_compute[229589]: on Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: off Dec 2 04:34:53 localhost nova_compute[229589]: Linux KVM Hv Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: tdx Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: Dec 2 04:34:53 localhost nova_compute[229589]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.596 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.596 229593 INFO nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Secure Boot support detected#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.599 229593 INFO nova.virt.libvirt.driver [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.599 229593 INFO nova.virt.libvirt.driver [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is 
available so auto-converge will not be in use.#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.612 229593 DEBUG nova.virt.libvirt.driver [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.647 229593 INFO nova.virt.node [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Determined node identity c79215b2-6762-4f7f-a322-f44db2b0b9bd from /var/lib/nova/compute_id#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.666 229593 DEBUG nova.compute.manager [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Verified node c79215b2-6762-4f7f-a322-f44db2b0b9bd matches my host np0005541913.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.711 229593 DEBUG nova.compute.manager [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.715 229593 DEBUG nova.virt.libvirt.vif [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T08:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005541913.localdomain',hostname='test',id=2,image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T08:31:55Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005541913.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='e2d97696ab6749899bb8ba5ce29a3de2',ramdisk_id='',reservation_id='r-6ofcfgb6',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-02T08:31:55Z,user_data=None,user_id='cb8b7d2a63b642aa999db12e17eeb9e4',uuid=b254bb7f-2891-4b37-9c44-9700e301ce16,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": 
"e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.715 229593 DEBUG nova.network.os_vif_util [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Converting VIF {"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.715 229593 DEBUG nova.network.os_vif_util [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] 
Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.716 229593 DEBUG os_vif [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.740 229593 DEBUG ovsdbapp.backend.ovs_idl [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.740 229593 DEBUG ovsdbapp.backend.ovs_idl [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.740 229593 DEBUG ovsdbapp.backend.ovs_idl [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.740 229593 DEBUG ovsdbapp.backend.ovs_idl.vlog [None 
req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.741 229593 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.741 229593 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.741 229593 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.742 229593 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.744 229593 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.755 229593 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.755 229593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.756 229593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 04:34:53 localhost nova_compute[229589]: 2025-12-02 09:34:53.756 229593 INFO oslo.privsep.daemon [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpokq8zc9r/privsep.sock']#033[00m Dec 2 04:34:53 localhost python3.9[229954]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.309 229593 INFO oslo.privsep.daemon [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.214 229976 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.220 229976 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.225 229976 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.225 229976 INFO oslo.privsep.daemon [-] privsep daemon running as pid 
229976#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.605 229593 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.605 229593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a318f6a-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.606 229593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a318f6a-b3, col_values=(('external_ids', {'iface-id': '4a318f6a-b3c1-4690-8246-f7d046ccd64a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:b2:03', 'vm-uuid': 'b254bb7f-2891-4b37-9c44-9700e301ce16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.607 229593 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.608 229593 INFO os_vif [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3')#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.608 229593 DEBUG nova.compute.manager [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] [instance: 
b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.612 229593 DEBUG nova.compute.manager [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Dec 2 04:34:54 localhost nova_compute[229589]: 2025-12-02 09:34:54.613 229593 INFO nova.compute.manager [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.069 229593 INFO nova.service [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Updating service version for nova-compute on np0005541913.localdomain from 57 to 66#033[00m Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.102 229593 DEBUG oslo_concurrency.lockutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.103 229593 DEBUG oslo_concurrency.lockutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.103 229593 DEBUG oslo_concurrency.lockutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.104 229593 DEBUG nova.compute.resource_tracker [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.105 229593 DEBUG oslo_concurrency.processutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:34:55 localhost python3.9[230072]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None 
healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 2 04:34:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8101 DF PROTO=TCP SPT=46342 DPT=9101 SEQ=3209027010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799D0E50000000001030307) Dec 
2 04:34:55 localhost systemd-journald[47611]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 122.2 (407 of 333 items), suggesting rotation. Dec 2 04:34:55 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:34:55 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.572 229593 DEBUG oslo_concurrency.processutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.664 229593 DEBUG nova.virt.libvirt.driver [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:34:55 localhost nova_compute[229589]: 2025-12-02 09:34:55.664 229593 DEBUG nova.virt.libvirt.driver [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:34:55 localhost systemd[1]: Started libvirt nodedev daemon. Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.022 229593 WARNING nova.virt.libvirt.driver [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.023 229593 DEBUG nova.compute.resource_tracker [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12949MB free_disk=41.837242126464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.023 229593 DEBUG oslo_concurrency.lockutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.024 229593 DEBUG oslo_concurrency.lockutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.211 229593 DEBUG nova.compute.resource_tracker [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.212 229593 DEBUG nova.compute.resource_tracker [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.212 229593 DEBUG nova.compute.resource_tracker [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.229 229593 DEBUG nova.scheduler.client.report [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.248 229593 DEBUG nova.scheduler.client.report [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:34:56 
localhost nova_compute[229589]: 2025-12-02 09:34:56.248 229593 DEBUG nova.compute.provider_tree [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.264 229593 DEBUG nova.scheduler.client.report [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.290 229593 DEBUG nova.scheduler.client.report [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: 
COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_ABM,HW_CPU_X86_SHA,HW_CPU_X86_SSE42,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NODE,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.342 229593 DEBUG oslo_concurrency.processutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:34:56 localhost python3.9[230308]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:34:56 
localhost nova_compute[229589]: 2025-12-02 09:34:56.774 229593 DEBUG oslo_concurrency.processutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.779 229593 DEBUG nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Dec 2 04:34:56 localhost nova_compute[229589]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.779 229593 INFO nova.virt.libvirt.host [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] kernel doesn't support AMD SEV#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.780 229593 DEBUG nova.compute.provider_tree [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.780 229593 DEBUG nova.virt.libvirt.driver [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 
2025-12-02 09:34:56.823 229593 DEBUG nova.scheduler.client.report [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Updated inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.824 229593 DEBUG nova.compute.provider_tree [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Updating resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.824 229593 DEBUG nova.compute.provider_tree [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.894 229593 DEBUG nova.compute.provider_tree [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Updating 
resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd generation from 4 to 5 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.914 229593 DEBUG nova.compute.resource_tracker [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.915 229593 DEBUG oslo_concurrency.lockutils [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.915 229593 DEBUG nova.service [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.974 229593 DEBUG nova.service [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 2 04:34:56 localhost nova_compute[229589]: 2025-12-02 09:34:56.975 229593 DEBUG nova.servicegroup.drivers.db [None req-6701026c-4a93-489b-bff1-de903c44d2f8 - - - - - -] DB_Driver: join new ServiceGroup member np0005541913.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 2 04:34:57 localhost systemd[1]: Stopping nova_compute container... 
Dec 2 04:34:57 localhost journal[203664]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Dec 2 04:34:57 localhost systemd[1]: libpod-a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f.scope: Deactivated successfully. Dec 2 04:34:57 localhost journal[203664]: hostname: np0005541913.localdomain Dec 2 04:34:57 localhost journal[203664]: End of file while reading data: Input/output error Dec 2 04:34:57 localhost systemd[1]: libpod-a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f.scope: Consumed 4.487s CPU time. Dec 2 04:34:57 localhost podman[230334]: 2025-12-02 09:34:57.497646303 +0000 UTC m=+0.080513993 container died a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f-userdata-shm.mount: Deactivated successfully. Dec 2 04:34:58 localhost systemd[1]: var-lib-containers-storage-overlay-599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0-merged.mount: Deactivated successfully. Dec 2 04:34:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8102 DF PROTO=TCP SPT=46342 DPT=9101 SEQ=3209027010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799E0A50000000001030307) Dec 2 04:35:02 localhost podman[230334]: 2025-12-02 09:35:02.492121909 +0000 UTC m=+5.074989539 container cleanup a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:35:02 localhost podman[230334]: nova_compute Dec 2 04:35:02 localhost podman[230622]: error opening file `/run/crun/a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f/status`: No such file or directory Dec 2 04:35:02 localhost podman[230611]: 2025-12-02 09:35:02.576138335 +0000 UTC m=+0.052381467 container cleanup a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 04:35:02 localhost podman[230611]: nova_compute Dec 2 04:35:02 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Dec 2 04:35:02 localhost systemd[1]: Stopped nova_compute container. Dec 2 04:35:02 localhost systemd[1]: Starting nova_compute container... Dec 2 04:35:02 localhost systemd[1]: Started libcrun container. Dec 2 04:35:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:02 localhost podman[230624]: 2025-12-02 09:35:02.711555401 +0000 UTC m=+0.103900359 container init 
a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:35:02 localhost podman[230624]: 2025-12-02 09:35:02.721880132 +0000 UTC m=+0.114225070 container start a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:35:02 localhost podman[230624]: nova_compute Dec 2 04:35:02 localhost nova_compute[230637]: + sudo -E kolla_set_configs Dec 2 04:35:02 localhost systemd[1]: Started nova_compute container. 
Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Validating config file Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying service configuration files Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:35:02 
localhost nova_compute[230637]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Deleting /etc/ceph Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Creating directory /etc/ceph Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/ceph Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:02 localhost 
nova_compute[230637]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Writing out command to execute Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:02 localhost nova_compute[230637]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:35:02 localhost nova_compute[230637]: ++ cat /run_command Dec 2 04:35:02 localhost nova_compute[230637]: + CMD=nova-compute Dec 2 04:35:02 localhost nova_compute[230637]: + ARGS= Dec 2 04:35:02 localhost nova_compute[230637]: + sudo kolla_copy_cacerts Dec 2 04:35:02 localhost nova_compute[230637]: + [[ ! -n '' ]] Dec 2 04:35:02 localhost nova_compute[230637]: + . 
kolla_extend_start Dec 2 04:35:02 localhost nova_compute[230637]: + echo 'Running command: '\''nova-compute'\''' Dec 2 04:35:02 localhost nova_compute[230637]: Running command: 'nova-compute' Dec 2 04:35:02 localhost nova_compute[230637]: + umask 0022 Dec 2 04:35:02 localhost nova_compute[230637]: + exec nova-compute Dec 2 04:35:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:35:03.016 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:35:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:35:03.017 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:35:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:35:03.018 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:35:03 localhost python3.9[230759]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None 
decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None 
user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 2 04:35:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6762 DF PROTO=TCP SPT=57932 DPT=9102 SEQ=3257595781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799F27E0000000001030307) Dec 2 04:35:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55579 DF PROTO=TCP SPT=35406 DPT=9105 SEQ=372350077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799F2FE0000000001030307) Dec 2 04:35:04 localhost nova_compute[230637]: 2025-12-02 09:35:04.564 230641 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:35:04 localhost nova_compute[230637]: 2025-12-02 09:35:04.565 230641 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:35:04 localhost nova_compute[230637]: 2025-12-02 09:35:04.565 230641 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:35:04 localhost nova_compute[230637]: 2025-12-02 09:35:04.565 230641 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 2 04:35:04 localhost nova_compute[230637]: 2025-12-02 09:35:04.691 230641 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:35:04 localhost nova_compute[230637]: 2025-12-02 09:35:04.713 230641 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 
0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:35:04 localhost nova_compute[230637]: 2025-12-02 09:35:04.713 230641 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.082 230641 INFO nova.virt.driver [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 2 04:35:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:35:05 localhost systemd[1]: Started libpod-conmon-ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a.scope. Dec 2 04:35:05 localhost systemd[1]: Started libcrun container. Dec 2 04:35:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1eee259f81f330f1ee0081bc4f7673956ff5103e0f4825b5784a1732364fc7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1eee259f81f330f1ee0081bc4f7673956ff5103e0f4825b5784a1732364fc7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1eee259f81f330f1ee0081bc4f7673956ff5103e0f4825b5784a1732364fc7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.213 230641 INFO nova.compute.provider_config [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.221 230641 WARNING nova.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.221 230641 DEBUG oslo_concurrency.lockutils [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.221 230641 DEBUG oslo_concurrency.lockutils [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.222 230641 DEBUG oslo_concurrency.lockutils [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.222 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.222 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.222 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.223 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.223 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.223 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.223 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.223 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.223 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] backdoor_port = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.224 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.224 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.224 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.224 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost podman[230798]: 2025-12-02 09:35:05.223765381 +0000 UTC m=+0.079177125 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.224 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.224 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.225 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.225 230641 DEBUG oslo_service.service 
[None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.225 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.225 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.225 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] console_host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.225 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.225 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.225 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.226 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.226 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.226 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.226 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.226 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.226 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] default_schedule_zone = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.227 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.227 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.227 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.227 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.227 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.227 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.227 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost 
nova_compute[230637]: 2025-12-02 09:35:05.228 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.228 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.228 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.228 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.228 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.228 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.228 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.229 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.229 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.229 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.229 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.229 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.229 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.230 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost 
nova_compute[230637]: 2025-12-02 09:35:05.230 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.230 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.230 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.230 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.230 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.231 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.231 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.231 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.231 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost podman[230798]: 2025-12-02 09:35:05.231535093 +0000 UTC m=+0.086946807 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.231 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.231 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.231 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.231 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.232 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.232 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - 
-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.232 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.232 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.232 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.232 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.232 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.233 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.233 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.233 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.233 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.233 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.233 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.233 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.234 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.234 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.234 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.234 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.234 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.234 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.234 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.234 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.235 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.235 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.235 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.235 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.235 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.235 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.235 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.236 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] preallocate_images = none 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.236 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.236 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.236 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.236 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.236 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.236 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.236 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.237 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.237 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.237 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.237 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.237 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.237 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.237 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.238 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.238 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.238 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.238 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.238 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.238 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.238 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.239 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.239 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.239 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.239 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.239 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.240 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.240 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.240 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] shelved_offload_time = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.240 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.240 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.240 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.240 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.240 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.241 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.241 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost 
nova_compute[230637]: 2025-12-02 09:35:05.241 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.241 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.241 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.241 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.241 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.242 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.242 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.242 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 
- - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.242 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.242 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.242 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.242 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.243 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.243 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.243 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.243 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.243 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.243 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.243 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.244 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.244 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.244 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 
2025-12-02 09:35:05.244 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.244 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.245 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.245 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.245 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.245 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.246 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.246 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.246 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost podman[230785]: 2025-12-02 09:35:05.245588706 +0000 UTC m=+0.176622019 container init ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=nova_compute_init) Dec 2 04:35:05 localhost 
nova_compute[230637]: 2025-12-02 09:35:05.246 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.246 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.246 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.246 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.247 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.247 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.247 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost 
nova_compute[230637]: 2025-12-02 09:35:05.247 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.247 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.247 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.248 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.248 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.248 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.248 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost 
nova_compute[230637]: 2025-12-02 09:35:05.248 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.248 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.248 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.249 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.249 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.249 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.249 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.config_prefix = cache.oslo 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.249 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.249 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.249 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.250 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.250 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.250 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.250 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.250 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.250 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.250 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.251 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.251 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.251 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.251 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.251 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.251 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.251 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.252 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.252 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.252 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.252 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.252 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.252 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.252 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.253 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.253 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.253 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.253 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.253 230641 
DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.253 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.253 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.254 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.254 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.254 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.254 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.254 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - 
- - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost podman[230785]: 2025-12-02 09:35:05.254544819 +0000 UTC m=+0.185578132 container start ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.254 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.255 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.endpoint_template = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.255 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.255 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.255 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.255 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.255 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.256 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.256 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.256 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.256 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.256 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.256 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.256 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.257 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.257 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.257 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.257 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.257 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.257 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost python3.9[230759]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.257 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.258 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.259 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.259 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.259 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.260 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.260 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.260 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.261 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.261 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.connect_retry_delay = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.261 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.262 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.262 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.262 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.263 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.263 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.263 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 
localhost nova_compute[230637]: 2025-12-02 09:35:05.264 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.264 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.264 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.264 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.265 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.265 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.265 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.266 
230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.266 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.266 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.267 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.267 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.267 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.267 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.268 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.268 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.268 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.269 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.269 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.269 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.270 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.270 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.270 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.270 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.271 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.271 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.271 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.272 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.272 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.connection = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.272 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.273 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.273 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.273 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.273 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.274 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.274 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.274 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.275 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.275 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.275 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.275 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.276 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.276 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.276 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.277 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.277 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.277 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.278 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.278 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.278 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.279 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.279 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.279 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.279 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.280 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.280 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.280 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.281 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.281 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.281 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.281 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.282 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.282 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.282 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 
09:35:05.283 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.283 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.283 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.284 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.284 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.284 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.284 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.285 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.285 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.285 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.286 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.286 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.286 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.286 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.286 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.286 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.287 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.287 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.287 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.287 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.287 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.288 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.288 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.288 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.288 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.289 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.289 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.289 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.289 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.289 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.290 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.290 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.290 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.290 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.290 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.291 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.291 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.291 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.291 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.292 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.292 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.292 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.292 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.292 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.293 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.293 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.293 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.293 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.294 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.294 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.294 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.294 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.294 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.294 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.295 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.295 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.295 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 
localhost nova_compute[230637]: 2025-12-02 09:35:05.295 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.295 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.296 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.296 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.296 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.296 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.296 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.297 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.297 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.297 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.297 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.297 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.298 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.298 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.298 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.298 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.299 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.299 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.299 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.299 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.299 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.300 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.300 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.300 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.300 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.300 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.301 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.301 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.301 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.split_loggers = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.301 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.302 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.302 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.302 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.302 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.302 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.303 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.303 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.303 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.303 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.304 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.304 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.304 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.304 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.approle_secret_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.304 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.305 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.305 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.305 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.305 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.305 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.306 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 
2025-12-02 09:35:05.306 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.306 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.306 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.306 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.307 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.307 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.307 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.307 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.307 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.308 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.308 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.308 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.308 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.308 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.309 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.keyfile = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.309 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.309 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.309 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.309 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.310 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.310 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.310 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.status_code_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.310 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.310 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.311 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.311 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.311 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.311 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.311 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.312 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.312 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.312 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.312 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.313 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.313 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.313 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.313 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.313 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.313 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.314 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.314 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.314 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.314 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.315 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Applying nova statedir ownership Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.315 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.315 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.316 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Dec 2 04:35:05 localhost nova_compute_init[230823]: 
INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/b254bb7f-2891-4b37-9c44-9700e301ce16/ Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/b254bb7f-2891-4b37-9c44-9700e301ce16 already 42436:42436 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.316 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/b254bb7f-2891-4b37-9c44-9700e301ce16 to system_u:object_r:container_file_t:s0 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/b254bb7f-2891-4b37-9c44-9700e301ce16/console.log Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.316 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost 
nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/4ee0f3f792b433d78f415a6f600ca9c7d9f0adb3 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/ Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.316 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.317 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-4ee0f3f792b433d78f415a6f600ca9c7d9f0adb3 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.317 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66 Dec 2 04:35:05 localhost nova_compute_init[230823]: 
INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.317 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.317 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.317 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to 
system_u:object_r:container_file_t:s0 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.318 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.318 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673 Dec 2 04:35:05 localhost nova_compute_init[230823]: INFO:nova_statedir:Nova statedir ownership complete Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.318 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.318 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.318 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.319 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.319 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.319 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.319 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.319 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 
2025-12-02 09:35:05.320 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.320 230641 WARNING oslo_config.cfg [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Dec 2 04:35:05 localhost nova_compute[230637]: live_migration_uri is deprecated for removal in favor of two other options that Dec 2 04:35:05 localhost nova_compute[230637]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Dec 2 04:35:05 localhost nova_compute[230637]: and ``live_migration_inbound_addr`` respectively. Dec 2 04:35:05 localhost nova_compute[230637]: ). Its value may be silently ignored in the future.#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.320 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.320 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.321 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.321 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.321 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.321 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.322 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.322 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.322 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.322 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.322 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.323 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.323 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.323 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.323 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.324 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.324 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.324 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.324 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rbd_secret_uuid = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.324 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.325 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.325 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.325 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.325 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.325 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.326 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.326 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.326 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.326 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.326 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost systemd[1]: libpod-ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a.scope: Deactivated successfully. 
Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.326 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.328 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.328 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.328 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.329 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.329 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.329 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost 
nova_compute[230637]: 2025-12-02 09:35:05.329 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.329 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.329 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.329 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.329 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.330 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.330 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.330 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.330 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.331 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.331 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.331 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.331 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.331 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 
2025-12-02 09:35:05.331 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.332 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.332 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.332 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.332 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.332 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.332 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.332 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.332 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.333 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.333 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.333 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.333 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.333 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.334 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.334 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.334 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.334 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.334 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.334 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.334 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.334 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.service_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.335 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.335 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.335 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.335 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.335 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.335 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.335 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] neutron.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.336 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.336 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.336 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.336 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.336 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.336 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.337 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] pci.device_spec = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.337 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.337 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.337 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.337 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.337 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.337 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.337 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.338 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.338 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.338 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.338 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.338 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.338 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.338 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.338 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.339 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.339 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.339 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.339 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.339 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.339 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.339 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.340 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.340 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.340 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.340 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.340 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.340 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 
09:35:05.340 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.340 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.341 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.341 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.341 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.341 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.341 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.341 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.341 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.342 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.342 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.342 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.342 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.342 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.342 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - 
-] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.342 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.343 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.343 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.343 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.343 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.343 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.343 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.server_group_members = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.343 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.343 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.344 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.344 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.344 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.344 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.344 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.344 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.345 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.345 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.345 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.345 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.345 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.345 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.345 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.346 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.346 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.346 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.346 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.346 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.346 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.346 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.347 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.347 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.347 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.347 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 
09:35:05.347 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.347 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.347 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.348 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.348 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.348 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.348 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 
localhost nova_compute[230637]: 2025-12-02 09:35:05.348 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.348 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.348 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.348 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.349 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.349 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.349 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.349 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.349 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.349 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.349 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.350 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.350 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.350 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.350 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.350 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.350 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.350 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.351 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.351 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.351 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.351 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.351 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.351 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.351 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.351 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.352 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.352 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.352 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.352 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.352 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.352 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.352 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.353 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.353 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.353 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.353 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.353 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.353 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.353 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.354 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.354 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.354 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.354 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.354 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.354 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.354 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.354 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.355 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.355 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.355 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.355 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.355 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.355 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.355 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.356 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 
09:35:05.356 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.356 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.356 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.356 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.356 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.356 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.356 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.357 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 
- - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.357 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.357 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.357 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.357 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.357 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.357 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.357 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.serial_port_service_uri = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.358 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.358 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.358 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.358 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.358 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.358 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.358 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 
localhost nova_compute[230637]: 2025-12-02 09:35:05.359 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.359 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.359 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.359 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.359 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.359 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.360 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.360 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.360 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.360 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.360 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.360 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.360 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.360 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.disable_rootwrap = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.361 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.361 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.361 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.361 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.361 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.361 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.361 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.362 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.362 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.362 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.362 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.362 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.362 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.362 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.363 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.363 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.363 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.363 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.363 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.363 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.363 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.364 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.364 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.364 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.364 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.364 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.364 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.364 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.365 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.365 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.365 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.365 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.365 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.365 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.365 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.366 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.366 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.366 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.366 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.366 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.366 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 
2025-12-02 09:35:05.366 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.366 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.367 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.367 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.367 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.367 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.367 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost 
nova_compute[230637]: 2025-12-02 09:35:05.367 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.367 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.368 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.368 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.368 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.368 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.368 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.368 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.368 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.369 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.369 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.369 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.369 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.369 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 
- - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.369 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.369 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.370 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.370 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.370 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.370 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 
09:35:05.370 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.370 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.370 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.370 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.371 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.371 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.371 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 
09:35:05.371 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.371 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.371 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.371 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.372 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.372 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.372 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 
localhost nova_compute[230637]: 2025-12-02 09:35:05.372 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.372 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.372 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.372 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.373 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.373 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.373 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 
09:35:05.373 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.373 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.373 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.373 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.374 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.374 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.374 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.374 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.374 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.374 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.374 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.375 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.375 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.375 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.375 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.375 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.375 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.375 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.376 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.376 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.376 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.376 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.trust_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.376 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.376 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.376 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.377 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.377 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.377 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.377 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.377 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.377 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.377 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.378 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.378 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.378 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.378 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] 
vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.378 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.378 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.378 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.379 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.379 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.379 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.379 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.379 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.379 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.379 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.380 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.380 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.380 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.380 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.380 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.380 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.381 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.381 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.381 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.381 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.381 230641 DEBUG 
oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.381 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.381 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.381 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.382 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.382 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.382 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.382 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.382 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.382 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.382 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.383 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.383 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.383 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.383 230641 DEBUG oslo_service.service [None 
req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.383 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.383 230641 DEBUG oslo_service.service [None req-b5362df6-6f13-436d-974e-686253e69105 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.386 230641 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Dec 2 04:35:05 localhost podman[230838]: 2025-12-02 09:35:05.415904422 +0000 UTC m=+0.068398634 container died ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', 
'/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=nova_compute_init, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.424 230641 INFO nova.virt.node [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Determined node identity c79215b2-6762-4f7f-a322-f44db2b0b9bd from /var/lib/nova/compute_id#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.425 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.426 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.426 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.426 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.435 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Registering for lifecycle events _get_new_connection 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.436 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.437 230641 INFO nova.virt.libvirt.driver [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Connection event '1' reason 'None'#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.444 230641 INFO nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Libvirt host capabilities [multi-line capabilities XML dump; tags lost in extraction, recoverable values: host UUID f041467c-26d0-44b9-832e-8db5f9b7a49d; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory 16116612 KiB (4029153 pages); secmodels selinux (labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guest support for 32-bit and 64-bit via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (alias pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (alias q35)]#033[00m Dec 2 04:35:05 localhost podman[230838]: 2025-12-02 09:35:05.450827232 +0000 UTC m=+0.103321354 container cleanup
ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, container_name=nova_compute_init, managed_by=edpm_ansible) Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.453 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 2 04:35:05 localhost systemd[1]: libpod-conmon-ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a.scope: Deactivated successfully. 
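The `container died` and `container cleanup` records above both carry the nova_compute_init container's `config_data` dict. A stdlib-only Python sketch of how such a dict could be mapped onto a `podman run` command line; the key-to-flag mapping is an illustrative assumption, not edpm_ansible's actual code, and the dict below is a trimmed copy of the logged one:

```python
# Assumption: illustrative mapping of config_data keys (as logged by
# edpm_ansible/podman) onto podman run flags. Not the real deployment code.
config_data = {
    'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified',
    'user': 'root',
    'net': 'none',
    'security_opt': ['label=disable'],
    'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id'},
    'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared'],
}

def to_podman_argv(cfg, name='nova_compute_init'):
    """Build a podman run argv list from a config_data-style dict."""
    argv = ['podman', 'run', '--name', name]
    if cfg.get('user'):
        argv += ['--user', cfg['user']]
    if cfg.get('net'):
        argv += ['--net', cfg['net']]
    for opt in cfg.get('security_opt', []):
        argv += ['--security-opt', opt]
    for key, val in cfg.get('environment', {}).items():
        argv += ['--env', f'{key}={val}']
    for vol in cfg.get('volumes', []):
        argv += ['--volume', vol]
    argv.append(cfg['image'])  # image comes last, before any container command
    return argv

print(' '.join(to_podman_argv(config_data)))
```

Note the logged container also sets `'restart': 'never'` and `'detach': False`, which edpm_ansible handles at the systemd/quadlet layer rather than as plain run flags.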
Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.458 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: [multi-line domainCapabilities XML dump; tags lost in extraction, recoverable values: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-q35-rhel9.8.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd (types rom and pflash, readonly yes/no, secure no); host-model CPU EPYC-Rome, vendor AMD; custom CPU models include 486(-v1), Broadwell(-IBRS, -noTSX, -noTSX-IBRS, -v1..-v4), Cascadelake-Server(-noTSX, -v1..-v5), Conroe(-v1), Cooperlake(-v1, -v2), Denverton(-v1..-v3), Dhyana(-v1, -v2), EPYC, EPYC-Genoa(-v1), EPYC-IBPB, EPYC-Milan(-v1, -v2), EPYC-Rome; dump truncated here] Dec 2 04:35:05 localhost
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v4 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v1 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v2 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 
Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-noTSX Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-noTSX-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 
Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-noTSX Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v5 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 
2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v6 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v7 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-v1 Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: KnightsMill Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: KnightsMill-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G1 Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G1-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G2 Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G2-v1 Dec 2 04:35:05 localhost nova_compute[230637]: 
Opteron_G3 Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G3-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G4-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G5 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G5-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Penryn Dec 2 04:35:05 localhost nova_compute[230637]: Penryn-v1 Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-v1 Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-v2 Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 
Dec 2 04:35:05 localhost nova_compute[230637]: [libvirt domain capabilities XML, continued; the XML element markup was lost in this log capture. Surviving text values are listed below in original order, grouping approximate:]
Dec 2 04:35:05 localhost nova_compute[230637]: [CPU models, cont.: SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1]
Dec 2 04:35:05 localhost nova_compute[230637]: [memory backing source types: file, anonymous, memfd]
Dec 2 04:35:05 localhost nova_compute[230637]: [disk devices: disk, cdrom, floppy, lun; disk buses: fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional]
Dec 2 04:35:05 localhost nova_compute[230637]: [graphics types: vnc, egl-headless, dbus]
Dec 2 04:35:05 localhost nova_compute[230637]: [hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; hostdev subsystem types: usb, pci, scsi]
Dec 2 04:35:05 localhost nova_compute[230637]: [rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin]
Dec 2 04:35:05 localhost nova_compute[230637]: [filesystem driver types: path, handle, virtiofs]
Dec 2 04:35:05 localhost nova_compute[230637]: [tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external; tpm backend version: 2.0]
Dec 2 04:35:05 localhost nova_compute[230637]: [redirdev bus: usb; channel types: pty, unix; crypto: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv]
Dec 2 04:35:05 localhost nova_compute[230637]: [console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus]
Dec 2 04:35:05 localhost nova_compute[230637]: [hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; further values: 4095, on, off, off, Linux KVM Hv]
Dec 2 04:35:05 localhost nova_compute[230637]: [launch security: tdx]
Dec 2 04:35:05 localhost nova_compute[230637]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.463 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 2 04:35:05 localhost nova_compute[230637]: [libvirt domain capabilities XML for arch i686 / machine pc; markup likewise lost. Surviving values: path /usr/libexec/qemu-kvm; domain kvm; machine pc-i440fx-rhel7.6.0; arch i686; loader /usr/share/OVMF/OVMF_CODE.secboot.fd, loader types: rom, pflash; yes, no; no; on, off; on, off; host CPU model: EPYC-Rome, vendor: AMD]
Dec 2 04:35:05 localhost nova_compute[230637]: [CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX (entry truncated)]
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v5 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Conroe Dec 2 04:35:05 localhost nova_compute[230637]: Conroe-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Cooperlake Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cooperlake-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Cooperlake-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Denverton Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dhyana Dec 2 04:35:05 localhost nova_compute[230637]: Dhyana-v1 Dec 2 04:35:05 localhost 
nova_compute[230637]: Dhyana-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Genoa Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Genoa-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-IBPB Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v4 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v1 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v2 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v4 Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-noTSX Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-noTSX-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-noTSX Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 
2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 
Dec 2 04:35:05 localhost nova_compute[230637]: [libvirt domain-capabilities XML was logged here; markup was lost on ingestion and the repeated empty prefixes have been collapsed — recovered values are summarized below]
Dec 2 04:35:05 localhost nova_compute[230637]: CPU models (continued): Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Dec 2 04:35:05 localhost nova_compute[230637]: memory backing source types: file, anonymous, memfd
Dec 2 04:35:05 localhost nova_compute[230637]: disk device types: disk, cdrom, floppy, lun; buses: ide, fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Dec 2 04:35:05 localhost nova_compute[230637]: graphics types: vnc, egl-headless, dbus
Dec 2 04:35:05 localhost nova_compute[230637]: hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; models: virtio, virtio-transitional, virtio-non-transitional
Dec 2 04:35:05 localhost nova_compute[230637]: rng backend models: random, egd, builtin
Dec 2 04:35:05 localhost nova_compute[230637]: filesystem driver types: path, handle, virtiofs
Dec 2 04:35:05 localhost nova_compute[230637]: tpm models: tpm-tis, tpm-crb; backends: emulator, external; backend version: 2.0
Dec 2 04:35:05 localhost nova_compute[230637]: redirdev bus: usb; channel types: pty, unix; backend models: qemu, builtin
Dec 2 04:35:05 localhost nova_compute[230637]: interface backends: default, passt; panic models: isa, hyperv
Dec 2 04:35:05 localhost nova_compute[230637]: console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Dec 2 04:35:05 localhost nova_compute[230637]: [further duplicated empty log prefixes truncated]
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: relaxed Dec 2 04:35:05 localhost nova_compute[230637]: vapic Dec 2 04:35:05 localhost nova_compute[230637]: spinlocks Dec 2 04:35:05 localhost nova_compute[230637]: vpindex Dec 2 04:35:05 localhost nova_compute[230637]: runtime Dec 2 04:35:05 localhost nova_compute[230637]: synic Dec 2 04:35:05 localhost nova_compute[230637]: stimer Dec 2 04:35:05 localhost nova_compute[230637]: reset Dec 2 04:35:05 localhost nova_compute[230637]: vendor_id Dec 2 04:35:05 localhost nova_compute[230637]: frequencies Dec 2 04:35:05 localhost nova_compute[230637]: reenlightenment Dec 2 04:35:05 localhost nova_compute[230637]: tlbflush Dec 2 04:35:05 localhost nova_compute[230637]: ipi Dec 2 04:35:05 localhost nova_compute[230637]: avic Dec 2 04:35:05 localhost nova_compute[230637]: emsr_bitmap Dec 2 04:35:05 localhost nova_compute[230637]: xmm_input Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 4095 Dec 2 04:35:05 localhost nova_compute[230637]: on Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: Linux KVM Hv Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: tdx Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.502 230641 DEBUG nova.virt.libvirt.host [None 
req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.504 230641 DEBUG nova.virt.libvirt.volume.mount [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.507 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: /usr/libexec/qemu-kvm Dec 2 04:35:05 localhost nova_compute[230637]: kvm Dec 2 04:35:05 localhost nova_compute[230637]: pc-q35-rhel9.8.0 Dec 2 04:35:05 localhost nova_compute[230637]: x86_64 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: efi Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Dec 2 04:35:05 localhost nova_compute[230637]: /usr/share/edk2/ovmf/OVMF_CODE.fd Dec 2 04:35:05 localhost nova_compute[230637]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Dec 2 04:35:05 localhost nova_compute[230637]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: rom Dec 2 04:35:05 localhost nova_compute[230637]: pflash Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: yes Dec 2 04:35:05 localhost nova_compute[230637]: no Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: yes Dec 2 04:35:05 localhost nova_compute[230637]: no Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: on Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: on Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome Dec 2 04:35:05 localhost nova_compute[230637]: AMD Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 486 Dec 2 04:35:05 localhost nova_compute[230637]: 486-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-noTSX Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-noTSX-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-noTSX Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v5 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Conroe Dec 2 04:35:05 localhost nova_compute[230637]: Conroe-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Cooperlake Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cooperlake-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cooperlake-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Denverton Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dhyana Dec 2 04:35:05 localhost nova_compute[230637]: 
Dhyana-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dhyana-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Genoa Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Genoa-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-IBPB Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v4 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v1 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v2 Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v4 Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids-v1
Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-noTSX
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-noTSX-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v3
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v4
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-noTSX
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v3
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v4
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v5
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v6
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v7
Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge
Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-v1
Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-v2
Dec 2 04:35:05 localhost nova_compute[230637]: KnightsMill
Dec 2 04:35:05 localhost nova_compute[230637]: KnightsMill-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem
Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G1-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G2
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G2-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G3
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G3-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G4
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G4-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G5
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G5-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Penryn
Dec 2 04:35:05 localhost nova_compute[230637]: Penryn-v1
Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge
Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-v1
Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-v2
Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids
Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids-v1
Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids-v2
Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids-v3
Dec 2 04:35:05 localhost nova_compute[230637]: SierraForest
Dec 2 04:35:05 localhost nova_compute[230637]: SierraForest-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-noTSX-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-v3
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-v4
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-noTSX-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v3
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v4
Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v5
Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge
Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge-v2
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Westmere Dec 2 04:35:05 localhost nova_compute[230637]: Westmere-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Westmere-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Westmere-v2 Dec 2 04:35:05 localhost nova_compute[230637]: athlon Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: athlon-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: core2duo Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: core2duo-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: coreduo Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: coreduo-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: kvm32 Dec 2 04:35:05 localhost nova_compute[230637]: kvm32-v1 Dec 2 04:35:05 localhost nova_compute[230637]: kvm64 Dec 2 04:35:05 localhost nova_compute[230637]: kvm64-v1 Dec 2 04:35:05 localhost nova_compute[230637]: n270 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: n270-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: pentium Dec 2 04:35:05 localhost nova_compute[230637]: pentium-v1 Dec 2 04:35:05 localhost nova_compute[230637]: pentium2 Dec 2 04:35:05 localhost nova_compute[230637]: pentium2-v1 Dec 2 04:35:05 localhost nova_compute[230637]: pentium3 Dec 2 04:35:05 localhost nova_compute[230637]: pentium3-v1 Dec 2 04:35:05 localhost nova_compute[230637]: phenom Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: phenom-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: qemu32 Dec 2 04:35:05 localhost nova_compute[230637]: qemu32-v1 Dec 2 04:35:05 localhost nova_compute[230637]: qemu64 Dec 2 04:35:05 localhost nova_compute[230637]: qemu64-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: file Dec 2 04:35:05 localhost nova_compute[230637]: anonymous Dec 2 04:35:05 localhost nova_compute[230637]: memfd Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: disk Dec 2 04:35:05 localhost nova_compute[230637]: cdrom Dec 2 04:35:05 localhost nova_compute[230637]: floppy Dec 2 04:35:05 localhost nova_compute[230637]: lun Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: fdc Dec 2 04:35:05 localhost nova_compute[230637]: scsi Dec 2 04:35:05 localhost nova_compute[230637]: virtio Dec 2 04:35:05 localhost nova_compute[230637]: usb Dec 2 04:35:05 localhost nova_compute[230637]: sata Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: virtio Dec 2 04:35:05 localhost nova_compute[230637]: virtio-transitional Dec 2 04:35:05 localhost nova_compute[230637]: virtio-non-transitional Dec 2 04:35:05 
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: vnc Dec 2 04:35:05 localhost nova_compute[230637]: egl-headless Dec 2 04:35:05 localhost nova_compute[230637]: dbus Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: subsystem Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: default Dec 2 04:35:05 localhost nova_compute[230637]: mandatory Dec 2 04:35:05 localhost nova_compute[230637]: requisite Dec 2 04:35:05 localhost nova_compute[230637]: optional Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: usb Dec 2 04:35:05 localhost nova_compute[230637]: pci Dec 2 04:35:05 localhost nova_compute[230637]: scsi Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: virtio Dec 2 04:35:05 localhost nova_compute[230637]: virtio-transitional Dec 2 04:35:05 localhost nova_compute[230637]: virtio-non-transitional Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: random Dec 2 04:35:05 localhost nova_compute[230637]: egd Dec 2 04:35:05 localhost nova_compute[230637]: builtin Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: path Dec 2 04:35:05 localhost nova_compute[230637]: handle Dec 2 04:35:05 localhost nova_compute[230637]: virtiofs Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: tpm-tis Dec 2 04:35:05 localhost nova_compute[230637]: tpm-crb Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: emulator Dec 2 04:35:05 localhost nova_compute[230637]: external Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 2.0 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: usb Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: pty Dec 2 04:35:05 localhost nova_compute[230637]: unix Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: qemu Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: builtin Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: default Dec 2 04:35:05 localhost nova_compute[230637]: passt Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: isa Dec 2 04:35:05 localhost nova_compute[230637]: hyperv Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: null Dec 2 04:35:05 localhost nova_compute[230637]: vc Dec 2 04:35:05 localhost nova_compute[230637]: pty Dec 2 04:35:05 localhost nova_compute[230637]: dev Dec 2 04:35:05 localhost nova_compute[230637]: file Dec 2 04:35:05 localhost nova_compute[230637]: pipe Dec 2 04:35:05 localhost nova_compute[230637]: stdio Dec 2 04:35:05 localhost nova_compute[230637]: udp Dec 2 04:35:05 localhost nova_compute[230637]: tcp Dec 2 04:35:05 localhost nova_compute[230637]: unix Dec 2 04:35:05 localhost nova_compute[230637]: qemu-vdagent Dec 2 04:35:05 localhost nova_compute[230637]: dbus Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: relaxed Dec 2 04:35:05 localhost nova_compute[230637]: vapic Dec 2 04:35:05 localhost nova_compute[230637]: spinlocks Dec 2 04:35:05 localhost nova_compute[230637]: vpindex Dec 2 04:35:05 localhost nova_compute[230637]: runtime Dec 2 04:35:05 localhost nova_compute[230637]: synic Dec 2 04:35:05 localhost nova_compute[230637]: stimer Dec 2 04:35:05 localhost nova_compute[230637]: reset Dec 2 04:35:05 localhost nova_compute[230637]: vendor_id Dec 2 04:35:05 localhost nova_compute[230637]: frequencies Dec 2 04:35:05 localhost nova_compute[230637]: reenlightenment Dec 2 04:35:05 localhost nova_compute[230637]: tlbflush Dec 2 04:35:05 localhost nova_compute[230637]: ipi Dec 2 04:35:05 localhost nova_compute[230637]: avic Dec 2 04:35:05 localhost nova_compute[230637]: emsr_bitmap Dec 2 04:35:05 localhost nova_compute[230637]: xmm_input Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 4095 Dec 2 04:35:05 localhost nova_compute[230637]: on Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: Linux KVM Hv Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: tdx Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.569 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - 
- - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: /usr/libexec/qemu-kvm Dec 2 04:35:05 localhost nova_compute[230637]: kvm Dec 2 04:35:05 localhost nova_compute[230637]: pc-i440fx-rhel7.6.0 Dec 2 04:35:05 localhost nova_compute[230637]: x86_64 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: rom Dec 2 04:35:05 localhost nova_compute[230637]: pflash Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: yes Dec 2 04:35:05 localhost nova_compute[230637]: no Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: no Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: on Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: on Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: EPYC-Rome Dec 2 04:35:05 localhost nova_compute[230637]: AMD Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 486 Dec 2 04:35:05 localhost nova_compute[230637]: 486-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-noTSX Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-noTSX-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Broadwell-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-noTSX Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Cascadelake-Server-v5 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Conroe Dec 2 04:35:05 localhost nova_compute[230637]: Conroe-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Cooperlake Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
Dec 2 04:35:05 localhost nova_compute[230637]: Cooperlake-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Cooperlake-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Denverton
Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Denverton-v3
Dec 2 04:35:05 localhost nova_compute[230637]: Dhyana
Dec 2 04:35:05 localhost nova_compute[230637]: Dhyana-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Dhyana-v2
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Genoa
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Genoa-v1
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-IBPB
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan-v1
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Milan-v2
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v1
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v2
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v3
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-Rome-v4
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v1
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v2
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v3
Dec 2 04:35:05 localhost nova_compute[230637]: EPYC-v4
Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids
Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids-v1
Dec 2 04:35:05 localhost nova_compute[230637]: GraniteRapids-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-noTSX
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-noTSX-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v3
Dec 2 04:35:05 localhost nova_compute[230637]: Haswell-v4
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-noTSX
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v3
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v4
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v5
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v6
Dec 2 04:35:05 localhost nova_compute[230637]: Icelake-Server-v7
Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge
Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-v1
Dec 2 04:35:05 localhost nova_compute[230637]: IvyBridge-v2
Dec 2 04:35:05 localhost nova_compute[230637]: KnightsMill
Dec 2 04:35:05 localhost nova_compute[230637]: KnightsMill-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem
Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Nehalem-v2
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G1-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G2
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G2-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G3
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G3-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G4
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G4-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G5
Dec 2 04:35:05 localhost nova_compute[230637]: Opteron_G5-v1
Dec 2 04:35:05 localhost nova_compute[230637]: Penryn
Dec 2 04:35:05 localhost nova_compute[230637]: Penryn-v1
Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge
Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-IBRS
Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-v1
Dec 2 04:35:05 localhost nova_compute[230637]: SandyBridge-v2
Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids
Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids-v1
Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids-v2
Dec 2 04:35:05 localhost nova_compute[230637]: SapphireRapids-v3
Dec 2 04:35:05 localhost nova_compute[230637]: SierraForest
localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: SierraForest-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-noTSX-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 
Skylake-Client-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Client-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 
Skylake-Server-noTSX-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v3 Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Skylake-Server-v5 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 
2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge-v2 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge-v3 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Snowridge-v4 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Westmere Dec 2 04:35:05 localhost nova_compute[230637]: Westmere-IBRS Dec 2 04:35:05 localhost nova_compute[230637]: Westmere-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Westmere-v2 Dec 2 04:35:05 localhost nova_compute[230637]: athlon Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: athlon-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: core2duo Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: core2duo-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: coreduo Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: coreduo-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: kvm32 Dec 2 04:35:05 localhost nova_compute[230637]: kvm32-v1 Dec 2 04:35:05 localhost nova_compute[230637]: kvm64 Dec 2 04:35:05 localhost nova_compute[230637]: kvm64-v1 Dec 2 04:35:05 localhost nova_compute[230637]: n270 Dec 2 
04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: n270-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: pentium Dec 2 04:35:05 localhost nova_compute[230637]: pentium-v1 Dec 2 04:35:05 localhost nova_compute[230637]: pentium2 Dec 2 04:35:05 localhost nova_compute[230637]: pentium2-v1 Dec 2 04:35:05 localhost nova_compute[230637]: pentium3 Dec 2 04:35:05 localhost nova_compute[230637]: pentium3-v1 Dec 2 04:35:05 localhost nova_compute[230637]: phenom Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: phenom-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: qemu32 Dec 2 04:35:05 localhost nova_compute[230637]: qemu32-v1 Dec 2 04:35:05 localhost nova_compute[230637]: qemu64 Dec 2 04:35:05 localhost nova_compute[230637]: qemu64-v1 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: file Dec 2 04:35:05 localhost nova_compute[230637]: anonymous Dec 2 04:35:05 localhost nova_compute[230637]: memfd Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: disk Dec 2 04:35:05 localhost nova_compute[230637]: cdrom Dec 2 04:35:05 localhost nova_compute[230637]: floppy Dec 2 04:35:05 localhost nova_compute[230637]: lun Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: ide Dec 2 04:35:05 localhost nova_compute[230637]: fdc Dec 2 04:35:05 localhost nova_compute[230637]: scsi Dec 2 04:35:05 localhost nova_compute[230637]: virtio Dec 2 04:35:05 localhost nova_compute[230637]: usb Dec 2 04:35:05 localhost nova_compute[230637]: sata Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: virtio Dec 2 04:35:05 localhost nova_compute[230637]: virtio-transitional Dec 2 04:35:05 localhost nova_compute[230637]: virtio-non-transitional Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: vnc Dec 2 04:35:05 localhost nova_compute[230637]: egl-headless Dec 2 04:35:05 localhost nova_compute[230637]: dbus Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: subsystem Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: default Dec 2 04:35:05 localhost nova_compute[230637]: mandatory Dec 2 04:35:05 localhost nova_compute[230637]: requisite Dec 2 04:35:05 localhost nova_compute[230637]: optional Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: usb Dec 2 04:35:05 
localhost nova_compute[230637]: pci Dec 2 04:35:05 localhost nova_compute[230637]: scsi Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: virtio Dec 2 04:35:05 localhost nova_compute[230637]: virtio-transitional Dec 2 04:35:05 localhost nova_compute[230637]: virtio-non-transitional Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: random Dec 2 04:35:05 localhost nova_compute[230637]: egd Dec 2 04:35:05 localhost nova_compute[230637]: builtin Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: path Dec 2 04:35:05 localhost nova_compute[230637]: handle Dec 2 04:35:05 localhost nova_compute[230637]: virtiofs Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: tpm-tis Dec 2 04:35:05 localhost nova_compute[230637]: tpm-crb Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: emulator Dec 2 04:35:05 localhost nova_compute[230637]: external Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 2.0 Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost 
nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: usb Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: pty Dec 2 04:35:05 localhost nova_compute[230637]: unix Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: qemu Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: builtin Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: default Dec 2 04:35:05 localhost nova_compute[230637]: passt Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: isa Dec 2 04:35:05 localhost nova_compute[230637]: hyperv Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: null Dec 2 04:35:05 localhost nova_compute[230637]: vc Dec 2 04:35:05 localhost nova_compute[230637]: pty Dec 2 04:35:05 localhost nova_compute[230637]: dev Dec 2 04:35:05 localhost nova_compute[230637]: file Dec 2 04:35:05 localhost nova_compute[230637]: pipe Dec 2 04:35:05 localhost nova_compute[230637]: stdio Dec 2 04:35:05 localhost nova_compute[230637]: udp Dec 2 
04:35:05 localhost nova_compute[230637]: tcp Dec 2 04:35:05 localhost nova_compute[230637]: unix Dec 2 04:35:05 localhost nova_compute[230637]: qemu-vdagent Dec 2 04:35:05 localhost nova_compute[230637]: dbus Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: relaxed Dec 2 04:35:05 localhost nova_compute[230637]: vapic Dec 2 04:35:05 localhost nova_compute[230637]: spinlocks Dec 2 04:35:05 localhost nova_compute[230637]: vpindex Dec 2 04:35:05 localhost nova_compute[230637]: runtime Dec 2 04:35:05 localhost nova_compute[230637]: synic Dec 2 04:35:05 localhost nova_compute[230637]: stimer Dec 2 04:35:05 localhost nova_compute[230637]: reset Dec 2 04:35:05 localhost nova_compute[230637]: vendor_id Dec 2 04:35:05 localhost nova_compute[230637]: frequencies Dec 2 04:35:05 localhost nova_compute[230637]: reenlightenment Dec 2 04:35:05 localhost nova_compute[230637]: tlbflush Dec 2 04:35:05 localhost nova_compute[230637]: ipi Dec 2 04:35:05 localhost nova_compute[230637]: avic Dec 2 04:35:05 localhost nova_compute[230637]: emsr_bitmap Dec 2 04:35:05 localhost nova_compute[230637]: xmm_input Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: 4095 Dec 2 04:35:05 localhost nova_compute[230637]: on 
Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: off Dec 2 04:35:05 localhost nova_compute[230637]: Linux KVM Hv Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: tdx Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: Dec 2 04:35:05 localhost nova_compute[230637]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.635 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.635 230641 INFO nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Secure Boot support detected#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.638 230641 INFO nova.virt.libvirt.driver [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.639 230641 INFO nova.virt.libvirt.driver [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.657 230641 DEBUG nova.virt.libvirt.driver [None 
req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.736 230641 INFO nova.virt.node [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Determined node identity c79215b2-6762-4f7f-a322-f44db2b0b9bd from /var/lib/nova/compute_id#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.814 230641 DEBUG nova.compute.manager [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Verified node c79215b2-6762-4f7f-a322-f44db2b0b9bd matches my host np0005541913.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.878 230641 DEBUG nova.compute.manager [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.881 230641 DEBUG nova.virt.libvirt.vif [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T08:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005541913.localdomain',hostname='test',id=2,image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T08:31:55Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005541913.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='e2d97696ab6749899bb8ba5ce29a3de2',ramdisk_id='',reservation_id='r-6ofcfgb6',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-02T08:31:55Z,user_data=None,user_id='cb8b7d2a63b642aa999db12e17eeb9e4',uuid=b254bb7f-2891-4b37-9c44-9700e301ce16,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": 
"e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.882 230641 DEBUG nova.network.os_vif_util [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Converting VIF {"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.882 230641 DEBUG nova.network.os_vif_util [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] 
Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.883 230641 DEBUG os_vif [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.917 230641 DEBUG ovsdbapp.backend.ovs_idl [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.918 230641 DEBUG ovsdbapp.backend.ovs_idl [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.918 230641 DEBUG ovsdbapp.backend.ovs_idl [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.918 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [None 
req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.919 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.919 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.920 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.939 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.940 230641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.940 230641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 04:35:05 localhost nova_compute[230637]: 2025-12-02 09:35:05.941 230641 INFO oslo.privsep.daemon [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', 
'/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpzgrglbqo/privsep.sock']#033[00m Dec 2 04:35:06 localhost systemd[1]: session-54.scope: Deactivated successfully. Dec 2 04:35:06 localhost systemd[1]: session-54.scope: Consumed 2min 18.979s CPU time. Dec 2 04:35:06 localhost systemd-logind[757]: Session 54 logged out. Waiting for processes to exit. Dec 2 04:35:06 localhost systemd-logind[757]: Removed session 54. Dec 2 04:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-eb1eee259f81f330f1ee0081bc4f7673956ff5103e0f4825b5784a1732364fc7-merged.mount: Deactivated successfully. Dec 2 04:35:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a-userdata-shm.mount: Deactivated successfully. Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.543 230641 INFO oslo.privsep.daemon [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.423 230903 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.429 230903 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.433 230903 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.433 230903 INFO oslo.privsep.daemon [-] privsep daemon running as pid 230903#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.836 230641 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.837 230641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a318f6a-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.837 230641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a318f6a-b3, col_values=(('external_ids', {'iface-id': '4a318f6a-b3c1-4690-8246-f7d046ccd64a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:b2:03', 'vm-uuid': 'b254bb7f-2891-4b37-9c44-9700e301ce16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.838 230641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.838 230641 INFO os_vif [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3')#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.838 230641 DEBUG nova.compute.manager [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.842 230641 DEBUG nova.compute.manager [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.842 230641 INFO nova.compute.manager [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.905 230641 DEBUG oslo_concurrency.lockutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.905 230641 DEBUG oslo_concurrency.lockutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.905 230641 DEBUG oslo_concurrency.lockutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.906 230641 DEBUG nova.compute.resource_tracker [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Auditing locally available compute 
resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:35:06 localhost nova_compute[230637]: 2025-12-02 09:35:06.907 230641 DEBUG oslo_concurrency.processutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:35:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6764 DF PROTO=TCP SPT=57932 DPT=9102 SEQ=3257595781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A4799FEA50000000001030307) Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.352 230641 DEBUG oslo_concurrency.processutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.522 230641 DEBUG nova.virt.libvirt.driver [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.522 230641 DEBUG nova.virt.libvirt.driver [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.721 230641 WARNING 
nova.virt.libvirt.driver [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.724 230641 DEBUG nova.compute.resource_tracker [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12937MB free_disk=41.837242126464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.724 230641 DEBUG oslo_concurrency.lockutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.725 230641 DEBUG oslo_concurrency.lockutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.850 230641 DEBUG nova.compute.resource_tracker [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.851 230641 DEBUG nova.compute.resource_tracker [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.851 230641 DEBUG nova.compute.resource_tracker [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.893 230641 DEBUG nova.scheduler.client.report [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.949 230641 DEBUG nova.scheduler.client.report [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:35:07 
localhost nova_compute[230637]: 2025-12-02 09:35:07.950 230641 DEBUG nova.compute.provider_tree [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:35:07 localhost nova_compute[230637]: 2025-12-02 09:35:07.992 230641 DEBUG nova.scheduler.client.report [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.028 230641 DEBUG nova.scheduler.client.report [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: 
COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.067 230641 DEBUG oslo_concurrency.processutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.423 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.502 230641 DEBUG oslo_concurrency.processutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.508 230641 DEBUG nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Dec 2 04:35:08 localhost nova_compute[230637]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.509 230641 INFO nova.virt.libvirt.host [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] kernel doesn't support AMD SEV#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.511 230641 DEBUG nova.compute.provider_tree [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.512 230641 DEBUG nova.virt.libvirt.driver [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.537 230641 DEBUG nova.scheduler.client.report [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 
'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.561 230641 DEBUG nova.compute.resource_tracker [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.561 230641 DEBUG oslo_concurrency.lockutils [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.562 230641 DEBUG nova.service [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.589 230641 DEBUG nova.service [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 2 04:35:08 localhost nova_compute[230637]: 2025-12-02 09:35:08.590 230641 DEBUG nova.servicegroup.drivers.db [None req-4b1c03a2-4e75-4ab0-a7b4-31e0ad296a73 - - - - - -] DB_Driver: join new ServiceGroup member np0005541913.localdomain to the compute group, service = join 
/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 2 04:35:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28139 DF PROTO=TCP SPT=52202 DPT=9100 SEQ=179153176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A0A640000000001030307) Dec 2 04:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:35:10 localhost podman[230951]: 2025-12-02 09:35:10.427495184 +0000 UTC m=+0.065146525 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team) Dec 2 04:35:10 localhost podman[230951]: 2025-12-02 09:35:10.459283559 +0000 UTC m=+0.096934900 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 04:35:10 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:35:10 localhost nova_compute[230637]: 2025-12-02 09:35:10.926 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:11 localhost nova_compute[230637]: 2025-12-02 09:35:11.592 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:35:11 localhost nova_compute[230637]: 2025-12-02 09:35:11.617 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Triggering sync for uuid b254bb7f-2891-4b37-9c44-9700e301ce16 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 2 04:35:11 localhost nova_compute[230637]: 2025-12-02 09:35:11.617 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:35:11 localhost nova_compute[230637]: 2025-12-02 09:35:11.618 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:35:11 localhost nova_compute[230637]: 2025-12-02 09:35:11.618 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:35:11 localhost nova_compute[230637]: 2025-12-02 09:35:11.672 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:35:11 localhost sshd[230976]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:35:12 localhost systemd-logind[757]: New session 56 of user zuul. Dec 2 04:35:12 localhost systemd[1]: Started Session 56 of User zuul. Dec 2 04:35:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23699 DF PROTO=TCP SPT=58080 DPT=9100 SEQ=3870753883 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A15E40000000001030307) Dec 2 04:35:13 localhost python3.9[231087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:35:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:35:13 localhost nova_compute[230637]: 2025-12-02 09:35:13.466 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:13 localhost podman[231092]: 2025-12-02 09:35:13.479048566 +0000 UTC m=+0.116114281 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:35:13 localhost podman[231092]: 2025-12-02 09:35:13.513831552 +0000 UTC m=+0.150897237 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:35:13 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:35:14 localhost python3.9[231220]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:35:14 localhost systemd[1]: Reloading. Dec 2 04:35:14 localhost systemd-rc-local-generator[231248]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:35:14 localhost systemd-sysv-generator[231251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:15 localhost python3.9[231364]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:35:15 localhost network[231381]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:35:15 localhost network[231382]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:35:15 localhost network[231383]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:35:15 localhost nova_compute[230637]: 2025-12-02 09:35:15.930 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28141 DF PROTO=TCP SPT=52202 DPT=9100 SEQ=179153176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A22240000000001030307) Dec 2 04:35:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:35:18 localhost nova_compute[230637]: 2025-12-02 09:35:18.510 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6766 DF PROTO=TCP SPT=57932 DPT=9102 SEQ=3257595781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A2DE40000000001030307) Dec 2 04:35:20 localhost nova_compute[230637]: 2025-12-02 09:35:20.935 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9496 DF PROTO=TCP SPT=41536 DPT=9101 SEQ=1365803905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A39F10000000001030307) Dec 2 04:35:23 localhost nova_compute[230637]: 2025-12-02 09:35:23.542 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:24 localhost python3.9[231704]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:35:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9498 DF PROTO=TCP SPT=41536 DPT=9101 SEQ=1365803905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A45E40000000001030307) Dec 2 04:35:25 localhost nova_compute[230637]: 2025-12-02 09:35:25.939 230641 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:26 localhost python3.9[231815]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:26 localhost systemd-journald[47611]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 76.6 (255 of 333 items), suggesting rotation. Dec 2 04:35:26 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:35:26 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:35:26 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:35:26 localhost rsyslogd[754]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:35:26 localhost python3.9[231926]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:27 localhost python3.9[232036]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:35:28 localhost nova_compute[230637]: 2025-12-02 09:35:28.577 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:28 localhost python3.9[232146]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 2 04:35:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9499 DF PROTO=TCP SPT=41536 DPT=9101 SEQ=1365803905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A55A40000000001030307) Dec 2 04:35:30 localhost python3.9[232256]: 
ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:35:30 localhost systemd[1]: Reloading. Dec 2 04:35:30 localhost systemd-sysv-generator[232286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:35:30 localhost systemd-rc-local-generator[232283]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:30 localhost nova_compute[230637]: 2025-12-02 09:35:30.943 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:31 localhost python3.9[232401]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:35:33 localhost python3.9[232512]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:35:33 localhost nova_compute[230637]: 2025-12-02 09:35:33.592 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45032 
DF PROTO=TCP SPT=47504 DPT=9102 SEQ=606966644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A67AD0000000001030307) Dec 2 04:35:33 localhost python3.9[232620]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:35:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62666 DF PROTO=TCP SPT=46636 DPT=9105 SEQ=3780615674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A682E0000000001030307) Dec 2 04:35:34 localhost python3.9[232730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:35 localhost python3.9[232816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668134.1926758-360-172485861246736/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=bfc5245921d5dcd25b60f488666da7c4ada35563 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:35:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:35:35 localhost podman[232840]: 2025-12-02 09:35:35.45556416 +0000 UTC m=+0.083660078 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 04:35:35 localhost podman[232840]: 2025-12-02 09:35:35.494468329 +0000 UTC m=+0.122564227 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 2 04:35:35 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:35:35 localhost nova_compute[230637]: 2025-12-02 09:35:35.948 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:36 localhost python3.9[232946]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None Dec 2 04:35:36 localhost python3.9[233056]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None Dec 2 04:35:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45034 DF PROTO=TCP SPT=47504 DPT=9102 SEQ=606966644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A73A40000000001030307) Dec 2 04:35:37 localhost python3.9[233167]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 2 04:35:38 localhost nova_compute[230637]: 2025-12-02 09:35:38.629 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:38 localhost python3.9[233283]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541913.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER 
expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Dec 2 04:35:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34916 DF PROTO=TCP SPT=42940 DPT=9100 SEQ=3703420671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A7FA40000000001030307) Dec 2 04:35:40 localhost python3.9[233399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:40 localhost nova_compute[230637]: 2025-12-02 09:35:40.952 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:35:41 localhost podman[233457]: 2025-12-02 09:35:41.447578128 +0000 UTC m=+0.084776228 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:35:41 localhost podman[233457]: 2025-12-02 09:35:41.484043801 +0000 UTC m=+0.121241831 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 04:35:41 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:35:41 localhost python3.9[233498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668140.0651753-564-75473137919702/.source.conf _original_basename=ceilometer.conf follow=False checksum=9b40aa523dc31738ea523cc852832670ccea382a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:42 localhost python3.9[233618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:42 localhost python3.9[233704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668141.7342327-564-233550408512229/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63247 DF PROTO=TCP SPT=49968 DPT=9882 SEQ=4282664875 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A8BA40000000001030307) Dec 2 04:35:43 localhost python3.9[233812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Dec 2 04:35:43 localhost nova_compute[230637]: 2025-12-02 09:35:43.629 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:43 localhost python3.9[233898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668142.7743542-564-168919232804067/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:35:44 localhost podman[234007]: 2025-12-02 09:35:44.446154187 +0000 UTC m=+0.086195538 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Dec 2 04:35:44 localhost podman[234007]: 2025-12-02 09:35:44.481347855 +0000 UTC m=+0.121389246 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:35:44 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:35:44 localhost python3.9[234006]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:35:45 localhost python3.9[234133]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:35:45 localhost python3.9[234241]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:45 localhost nova_compute[230637]: 2025-12-02 09:35:45.955 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:46 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34918 DF PROTO=TCP SPT=42940 DPT=9100 SEQ=3703420671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479A97640000000001030307) Dec 2 04:35:46 localhost python3.9[234327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668145.4032545-741-188944311389369/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:46 localhost python3.9[234435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:47 localhost python3.9[234490]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:47 localhost python3.9[234598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Dec 2 04:35:48 localhost python3.9[234684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668147.3689075-741-250313499665376/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:48 localhost nova_compute[230637]: 2025-12-02 09:35:48.631 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:48 localhost python3.9[234792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:49 localhost python3.9[234878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668148.4249544-741-8876780757638/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62670 DF PROTO=TCP SPT=46636 DPT=9105 SEQ=3780615674 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479AA3E40000000001030307) Dec 2 04:35:49 localhost python3.9[234986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:50 localhost python3.9[235072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668149.4338768-741-150443456949805/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:50 localhost python3.9[235180]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:50 localhost nova_compute[230637]: 2025-12-02 09:35:50.958 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:51 localhost python3.9[235266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668150.3922884-741-177399318022042/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:51 
localhost python3.9[235374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:52 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51926 DF PROTO=TCP SPT=52794 DPT=9101 SEQ=4271548448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479AAF200000000001030307) Dec 2 04:35:52 localhost python3.9[235460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668151.4334655-741-135975058895471/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:52 localhost python3.9[235568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:53 localhost python3.9[235654]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668152.570758-741-174614124792098/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None 
group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:53 localhost nova_compute[230637]: 2025-12-02 09:35:53.634 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:54 localhost python3.9[235762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:54 localhost python3.9[235848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668153.597405-741-253459030562561/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51928 DF PROTO=TCP SPT=52794 DPT=9101 SEQ=4271548448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479ABB240000000001030307) Dec 2 04:35:55 localhost nova_compute[230637]: 2025-12-02 09:35:55.962 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:55 localhost python3.9[235956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Dec 2 04:35:56 localhost python3.9[236042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668155.4869292-741-67139106266429/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:57 localhost python3.9[236150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:35:58 localhost python3.9[236236]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668156.6512158-741-88948770972438/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:35:58 localhost nova_compute[230637]: 2025-12-02 09:35:58.636 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:35:59 localhost python3.9[236346]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:35:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51929 DF PROTO=TCP SPT=52794 DPT=9101 SEQ=4271548448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479ACAE40000000001030307) Dec 2 04:35:59 localhost python3.9[236456]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:35:59 localhost systemd[1]: Reloading. Dec 2 04:35:59 localhost systemd-rc-local-generator[236480]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:35:59 localhost systemd-sysv-generator[236484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:35:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:35:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:00 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:00 localhost systemd[1]: Listening on Podman API Socket. Dec 2 04:36:00 localhost nova_compute[230637]: 2025-12-02 09:36:00.964 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:01 localhost python3.9[236605]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:36:01 localhost python3.9[236693]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668160.61259-1257-146607915792693/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:36:01 localhost python3.9[236748]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Dec 2 04:36:02 localhost python3.9[236836]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668160.61259-1257-146607915792693/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:36:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:36:03.018 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:36:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:36:03.018 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:36:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:36:03.020 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:36:03 localhost nova_compute[230637]: 2025-12-02 09:36:03.639 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:03 localhost python3.9[236946]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry 
config_pattern=ceilometer_agent_compute.json debug=False Dec 2 04:36:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27093 DF PROTO=TCP SPT=53614 DPT=9102 SEQ=3585547451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479ADCDD0000000001030307) Dec 2 04:36:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39646 DF PROTO=TCP SPT=59352 DPT=9105 SEQ=3701199635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479ADD5F0000000001030307) Dec 2 04:36:04 localhost nova_compute[230637]: 2025-12-02 09:36:04.780 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:04 localhost nova_compute[230637]: 2025-12-02 09:36:04.781 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:04 localhost nova_compute[230637]: 2025-12-02 09:36:04.781 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:36:04 localhost nova_compute[230637]: 2025-12-02 09:36:04.781 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:36:04 
localhost python3.9[237056]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:36:05 localhost nova_compute[230637]: 2025-12-02 09:36:05.970 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:36:06 localhost systemd[1]: tmp-crun.muGbOk.mount: Deactivated successfully. Dec 2 04:36:06 localhost podman[237128]: 2025-12-02 09:36:06.445848665 +0000 UTC m=+0.077942014 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:36:06 localhost podman[237128]: 2025-12-02 09:36:06.458975673 +0000 UTC m=+0.091069032 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd) Dec 2 04:36:06 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:36:06 localhost nova_compute[230637]: 2025-12-02 09:36:06.957 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:36:06 localhost nova_compute[230637]: 2025-12-02 09:36:06.957 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:36:06 localhost nova_compute[230637]: 2025-12-02 09:36:06.957 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:36:06 localhost nova_compute[230637]: 2025-12-02 09:36:06.957 230641 DEBUG nova.objects.instance [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:36:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27095 DF PROTO=TCP SPT=53614 DPT=9102 
SEQ=3585547451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479AE8E40000000001030307) Dec 2 04:36:07 localhost python3[237185]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:36:07 localhost python3[237185]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "343ba269c9fe0a56d7572c8ca328dbce002017c4dd4986f43667971dd03085c2",#012 "Digest": "sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:21:53.58682213Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 505175293,#012 "VirtualSize": 505175293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",#012 "sha256:38a03f5e96658211fb28e2f87c11ffad531281d1797368f48e6cd4af7ac97c0e"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": 
"/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 
'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 Dec 2 04:36:07 localhost podman[237234]: 2025-12-02 09:36:07.92202396 +0000 UTC m=+0.071424657 container remove 4dfd7f1c1fc63e05e05b9092f58ce77f66c4bad4f443866ab42107af804ab4be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'72848ce4d815e5b4e89ff3e01c5f9f7e'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vendor=Red Hat, Inc.) 
Dec 2 04:36:07 localhost python3[237185]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Dec 2 04:36:07 localhost podman[237247]: 2025-12-02 09:36:07.974948541 +0000 UTC m=+0.037292600 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Dec 2 04:36:08 localhost podman[237247]: Dec 2 04:36:08 localhost podman[237247]: 2025-12-02 09:36:08.012284011 +0000 UTC m=+0.074628070 container create 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 04:36:08 localhost python3[237185]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume 
/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Dec 2 04:36:08 localhost nova_compute[230637]: 2025-12-02 09:36:08.643 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:08 localhost python3.9[237392]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.242 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.267 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.267 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.267 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.268 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.268 
230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.268 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.268 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.269 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.269 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.269 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.285 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.286 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.286 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.286 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.286 
230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:36:09 localhost python3.9[237524]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.701 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.913 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:36:09 localhost nova_compute[230637]: 2025-12-02 09:36:09.914 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:36:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11273 DF PROTO=TCP 
SPT=60532 DPT=9100 SEQ=1810058814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479AF4E50000000001030307) Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.122 230641 WARNING nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.123 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12928MB free_disk=41.837242126464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", 
"product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.123 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.123 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:36:10 localhost python3.9[237635]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668169.6609342-1449-157646911716245/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None 
attributes=None Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.427 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.428 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.428 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.483 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.918 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 
2025-12-02 09:36:10.925 230641 DEBUG nova.compute.provider_tree [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.947 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.950 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:36:10 localhost nova_compute[230637]: 2025-12-02 09:36:10.950 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:36:11 localhost nova_compute[230637]: 2025-12-02 09:36:11.001 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
04:36:11 localhost python3.9[237710]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:36:11 localhost systemd[1]: Reloading. Dec 2 04:36:11 localhost systemd-sysv-generator[237739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:36:11 localhost systemd-rc-local-generator[237735]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:36:11 localhost podman[237749]: 2025-12-02 09:36:11.619315774 +0000 UTC m=+0.058448694 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:36:11 localhost podman[237749]: 2025-12-02 09:36:11.654869008 +0000 UTC m=+0.094001928 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:36:11 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:36:12 localhost python3.9[237828]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:36:12 localhost systemd[1]: Reloading. Dec 2 04:36:12 localhost systemd-rc-local-generator[237856]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:12 localhost systemd-sysv-generator[237860]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:12 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 2 04:36:12 localhost systemd[1]: Started libcrun container. Dec 2 04:36:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ddd6aaec82e09d3b6ef171e2ac941eb72bf7461746145a1488501bf649c854/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Dec 2 04:36:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ddd6aaec82e09d3b6ef171e2ac941eb72bf7461746145a1488501bf649c854/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Dec 2 04:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 04:36:12 localhost podman[237868]: 2025-12-02 09:36:12.660025333 +0000 UTC m=+0.142730791 container init 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + sudo -E kolla_set_configs Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: sudo: unable to send audit 
message: Operation not permitted Dec 2 04:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:36:12 localhost podman[237868]: 2025-12-02 09:36:12.705896482 +0000 UTC m=+0.188601890 container start 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS) Dec 2 04:36:12 localhost podman[237868]: ceilometer_agent_compute Dec 2 04:36:12 localhost systemd[1]: Started ceilometer_agent_compute container. Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Validating config file Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Copying service configuration files Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:12 localhost 
ceilometer_agent_compute[237880]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: INFO:__main__:Writing out command to execute Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: ++ cat /run_command Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + ARGS= Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + sudo kolla_copy_cacerts Dec 2 04:36:12 localhost podman[237890]: 2025-12-02 09:36:12.78944525 +0000 UTC m=+0.079150626 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: sudo: unable to send audit message: Operation not permitted Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + [[ ! -n '' ]] Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + . kolla_extend_start Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + umask 0022 Dec 2 04:36:12 localhost ceilometer_agent_compute[237880]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Dec 2 04:36:12 localhost podman[237890]: 2025-12-02 09:36:12.822066858 +0000 UTC m=+0.111772254 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm) Dec 2 04:36:12 localhost podman[237890]: unhealthy Dec 2 04:36:12 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:36:12 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Failed with result 'exit-code'. 
Dec 2 04:36:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45363 DF PROTO=TCP SPT=43532 DPT=9882 SEQ=1779290621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479B00E40000000001030307) Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.509 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] 
batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.510 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 
04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.511 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = 
%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.512 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] 
reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.513 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG 
cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.514 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 
DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.515 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = 
service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] 
monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG 
cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.517 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue 
[-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG 
cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.520 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 
09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.521 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.523 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.541 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.543 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.544 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. 
Dec 2 04:36:13 localhost python3.9[238021]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:36:13 localhost nova_compute[230637]: 2025-12-02 09:36:13.644 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.648 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 2 04:36:13 localhost systemd[1]: Stopping ceilometer_agent_compute container... Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.714 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.714 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.714 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.714 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.714 12 DEBUG cotyledon.oslo_config_glue [-] config files: 
['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.714 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.714 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.715 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG 
cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 
DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.716 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s 
%(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.717 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.718 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.719 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 
09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.720 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] 
monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.721 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.722 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 
12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 
09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.724 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 
09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.726 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.727 12 
DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 
09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.728 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.729 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.730 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] 
oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.731 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.732 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.734 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.742 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.755 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.856 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown 
/usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.856 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308 Dec 2 04:36:13 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:13.857 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12] Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.116 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}732655fd94880f2cb79d6c2d7618e43553cd830a8505bfe543b3beb2043aad73" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.216 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 327 Content-Type: application/json Date: Tue, 02 Dec 2025 09:36:14 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-8d02e764-7805-4f9c-ad21-a8ed3c0f0e19 x-openstack-request-id: req-8d02e764-7805-4f9c-ad21-a8ed3c0f0e19 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.216 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "45a99238-6f19-4f9e-be82-6ef3af1dcb31", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}, {"rel": "bookmark", "href": 
"http://nova-internal.openstack.svc:8774/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.216 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-8d02e764-7805-4f9c-ad21-a8ed3c0f0e19 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.219 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}732655fd94880f2cb79d6c2d7618e43553cd830a8505bfe543b3beb2043aad73" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.246 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 494 Content-Type: application/json Date: Tue, 02 Dec 2025 09:36:14 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b2cf5dcc-c772-4a60-8f04-859f032aad6b x-openstack-request-id: req-b2cf5dcc-c772-4a60-8f04-859f032aad6b _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.246 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "45a99238-6f19-4f9e-be82-6ef3af1dcb31", "name": "m1.small", "ram": 512, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, 
"rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.246 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31 used request id req-b2cf5dcc-c772-4a60-8f04-859f032aad6b request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.248 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.249 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.253 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b254bb7f-2891-4b37-9c44-9700e301ce16 / 
tap4a318f6a-b3 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.254 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f9671c9a-b35d-4eda-afa9-673561907c85', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.249217', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 
'tap4a318f6a-b3'}, 'message_id': '583bec56-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': 'c6c114e6a73922f4a5eb39892eb243d69261a422cf05918d8753520ecf3d0aca'}]}, 'timestamp': '2025-12-02 09:36:14.255748', '_unique_id': 'be1253fff72b4a00bb7995df37afc65d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.263 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.268 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.268 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e8793f53-e9b8-4667-824c-aa9bdd5871b2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.268266', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '583df91a-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': 'b5a6c02117e884f0c09694022f655b0cc212d3a49e4b4750e731fc966548e842'}]}, 'timestamp': '2025-12-02 09:36:14.268968', '_unique_id': '35e7b5badfd648768aad2f4e964e677f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.270 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.271 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.271 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 8783 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f1ee184a-c8cb-45d1-86d8-a35d3c3359cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8783, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.271870', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '583e8236-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': '93e03528d9f855dbbdf1d55efa508fcd6697b995b07bd2d3d5427ee38e368018'}]}, 'timestamp': '2025-12-02 09:36:14.272452', '_unique_id': 'b2857a52c8764f32bb14a2386624d152'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.273 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.275 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.275 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5fb7cdab-2b60-42e8-a8bb-85430e9be17e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.275377', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '583f0b8e-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': 'b9528425c9ef1dd14184ab43125467d2c9851704db5b1295ae613b39e8f1d8d2'}]}, 'timestamp': '2025-12-02 09:36:14.275966', '_unique_id': 'eb6adac3ecfc49838ab8f0cea774339d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.277 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.278 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.316 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 524 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.317 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4de888c2-a866-41e4-bb60-d676f81e1dab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 524, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.279027', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '584562fe-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': 'a855d23d30179f25f95c660f00cfc09eab516cc760774d742470c0e138a350ed'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.279027', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '58456de4-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '539accf333fe0423ecf1b98e7f000883b356b1031fc6be33fa0e7035b3e66088'}]}, 'timestamp': '2025-12-02 09:36:14.317712', '_unique_id': '66fd777f82934b79ac8253fa10be1889'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 
localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.318 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.319 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.319 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '44aafa2c-1012-4322-875a-8a40773442c2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.319450', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '5845be7a-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': '8723a344d6a9dfeb09acb493b00a91e6f8ac9a9c85b26f6f3b829564e585f418'}]}, 'timestamp': '2025-12-02 09:36:14.319716', '_unique_id': '7ba6399589774e87ab5f1c1e3a20f120'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 
09:36:14.320 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 
09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.320 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 73900032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8fd2d381-5e1d-4580-8363-820faa313817', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73900032, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.320804', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5845f2aa-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '0b3446c7e73403be1f30dc118c9f826ed9e3d42543bc16679f68523744c42a3d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.320804', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5845fa52-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '7bf67f16a7938318dc91529988feefacb323d12400c2d02088c73461b23ea76c'}]}, 'timestamp': '2025-12-02 09:36:14.321208', '_unique_id': 'd746a7cde64e49e7ac02f911dcd59498'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.321 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.322 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.322 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.322 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.322 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of 
pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.345 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 51600000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '96a9c44d-cab2-4f13-8944-dbd7e2a3c464', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 51600000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:36:14.322796', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '5849b0b6-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.564024255, 'message_signature': 'd2e2168b3899449cc6a224a3043d4085ffbff6061496903c5906a8b34d321457'}]}, 'timestamp': '2025-12-02 09:36:14.345572', '_unique_id': 
'6378ea32f7cb469f97e70959ee19e451'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR 
oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) 
as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging 
kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.346 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 286697561 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 39228582 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9ac9181d-fa1b-475b-8fe0-8234c5910900', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 286697561, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.347007', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '5849f242-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '4d157c954ca33884c02825786766e7c2af024cf2738bd4b760a3d5f4c2454be9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 39228582, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.347007', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '5849f9f4-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '3cb68ce76858c6ed043e24f5e745873a3529fe58b137608908db5f8afedc74b9'}]}, 'timestamp': '2025-12-02 09:36:14.347413', '_unique_id': '9e835ad899fa462187ad2bead531797c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.347 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.348 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.348 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1433516318 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.348 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 164656012 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '7ce0de5d-7010-46af-9cfa-8cba067d5191', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1433516318, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.348456', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '584a2b04-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '378eb934d99e728c62ae0742859a93c74bb438bdf2bb7a1e3c53ae8082c0a2ce'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 164656012, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.348456', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '584a3360-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '41e031c0b2af089f8ac20fd4665e0d049a0fdcaf76cca5724459650753ea1649'}]}, 'timestamp': '2025-12-02 09:36:14.348881', '_unique_id': '5f0fe1606627428f8e9e2c05d5c3e603'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 
04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.349 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8d18c086-77af-4ed6-9f9b-b2affbb440f2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.349889', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '584a62f4-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': '141d66931daa5a03e22b15d05478820e67b9f0014228e6ea007c00f3f635fec2'}]}, 'timestamp': '2025-12-02 09:36:14.350109', '_unique_id': '84e7bd85bd4d4077adfd627889db059e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.350 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.351 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '743be9b0-e7bb-4741-bda3-c7c63b3984de', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.351102', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '584a922e-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': '6566f74ba8c5e53a9b777c7c0aff812dc89edabec606246ec81a9619c829dc2e'}]}, 'timestamp': '2025-12-02 09:36:14.351319', '_unique_id': 'cd4b07197de74f498162bfddf3323e44'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.351 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.352 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.352 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.352 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.352 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.352 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.352 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 
09:36:14.352 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.352 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e07f67d4-faaa-4805-82a9-787783d64269', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.352839', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '584ad5d6-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': 'fe6b60d534f36fcffe11b80d581a8dd0a591f0dacd5799facfb9e5703c1d9108'}]}, 'timestamp': '2025-12-02 09:36:14.353053', '_unique_id': '724f1a5d109849e2a6fbba2bc5051598'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.353 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5a6f9318-a7d0-49ed-aafb-2b2ccccdd049', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.354030', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '584b04b6-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': 'ec968e7dbb49b7b264302aa31fbbcee7ca8d59ddc2cc0c8bf913b3811039a662'}]}, 'timestamp': '2025-12-02 09:36:14.354251', '_unique_id': '1444bec25ce74c01830889731cb29254'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.354 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.355 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.355 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.355 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c63d7fbc-0978-4708-b545-d897fe85acdd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.355229', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '584b3332-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': 'dc15a66f408eb5d16497b26d78d8106993a53add8fcafbd82b338d949a6c0331'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.355229', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '584b3b84-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '0ad61b9e30aca39bdf630802035f736ac9f3520b5312e8ccc647745b9112cb35'}]}, 'timestamp': '2025-12-02 09:36:14.355671', '_unique_id': 'fdf458f6b38b47e4992aec252ec65ea0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 
localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.356 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.356 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.368 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.368 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0f7c4e22-8505-4101-aaf1-d031505a9f72', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.356733', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '584d4226-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.57587362, 'message_signature': '29aaf5a4943308fb512a44a06e3bb59b8d9616bd899adb2ad86697d0b1e284bb'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.356733', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '584d4a6e-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.57587362, 'message_signature': '8ea11760aee8a910272b152a233eed45232e6b64ad2762f90e99088cdea4e24d'}]}, 'timestamp': '2025-12-02 09:36:14.369133', '_unique_id': 'cb9f3e16ad6145f19457ed8e8e262b4d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 
localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.369 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.370 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.370 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.370 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.370 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.370 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.370 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c93ec8f7-feca-4b0c-8a87-2b4bcc42b888', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.370532', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '584d8a38-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.57587362, 'message_signature': '1388faf98aea55da9f5ba0ab52066175b1f245cb8e6662062644e8478f578bec'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.370532', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '584d91cc-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.57587362, 'message_signature': '77cea6be55432a8aae6a60b61b0bf1de7dd9d044edfe88085d5b50479a8a650d'}]}, 'timestamp': '2025-12-02 09:36:14.370958', '_unique_id': '4a01b6a131954990a29306246496e6b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.371 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9f9bbe2b-ef74-49c0-8d31-c9608418cb86', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:14.371948', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '584dc07a-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.468406627, 'message_signature': '9ae026c779e68295f69ff09229e8c8ddc7132603d94ca5527a5dd7148d430674'}]}, 'timestamp': '2025-12-02 09:36:14.372164', '_unique_id': 'be6444faa9504647b47f174564a03dbd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.372 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 52.35546875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '94a252b6-412c-4608-b294-a394e584b81c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.35546875, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:36:14.373194', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '584df0e0-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.564024255, 'message_signature': '837757a18ffe39772b370aa4e522a56853b7ce7d5e252dfc50efcc579cb1a8ae'}]}, 'timestamp': '2025-12-02 09:36:14.373396', '_unique_id': '4b6e69c80b434de386d501d968a3be44'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.373 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.374 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.374 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.374 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '660e8d6b-65ed-408f-8206-c30d826a7c15', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.374367', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '584e1ea8-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '99c83c30a258af158d0ac242442be8bed2f19dad016e82ed346085061e8f6007'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.374367', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '584e2682-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.498296316, 'message_signature': '5c432f2693a9c6f887bfd41023af7f01c1a26d511d253fe79f26bdb64518980c'}]}, 'timestamp': '2025-12-02 09:36:14.374795', '_unique_id': '327e5b5709e5418fb1a12a5d4eb5cab4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.375 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'ad449b7b-f2b0-4cb3-8bc5-344f22238c60', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:14.375775', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '584e55a8-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.57587362, 'message_signature': 'cac9e69306f15158c396aa1d57754d64543800ae7e8cd03cd291d77cce5c96ad'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:14.375775', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '584e5cd8-cf62-11f0-aa38-fa163e3f40cc', 'monotonic_time': 10336.57587362, 'message_signature': '941227a063d8f0c0a970b0d05821c936b4b2c910109dd999e7ec7085da039240'}]}, 'timestamp': '2025-12-02 09:36:14.376153', '_unique_id': '99fc5f802f3b458ca268f112ed0fdba1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:14 localhost 
ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 
2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.376 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:14 localhost ceilometer_agent_compute[237880]: 2025-12-02 09:36:14.388 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320 Dec 2 04:36:14 localhost journal[203664]: End of file while reading data: Input/output error Dec 2 04:36:14 localhost journal[203664]: End of file while reading data: Input/output error Dec 2 04:36:14 localhost systemd[1]: libpod-31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.scope: Deactivated successfully. Dec 2 04:36:14 localhost systemd[1]: libpod-31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.scope: Consumed 1.326s CPU time. 
Dec 2 04:36:14 localhost podman[238028]: 2025-12-02 09:36:14.555460967 +0000 UTC m=+0.881929697 container died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:36:14 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.timer: Deactivated successfully.
Dec 2 04:36:14 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563-userdata-shm.mount: Deactivated successfully.
Dec 2 04:36:14 localhost podman[238052]: 2025-12-02 09:36:14.6574713 +0000 UTC m=+0.077088493 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:36:14 localhost podman[238028]: 2025-12-02 09:36:14.679065025 +0000 UTC m=+1.005533725 container cleanup 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:36:14 localhost podman[238028]: ceilometer_agent_compute
Dec 2 04:36:14 localhost podman[238052]: 2025-12-02 09:36:14.698589487 +0000 UTC m=+0.118206730 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 2 04:36:14 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-56ddd6aaec82e09d3b6ef171e2ac941eb72bf7461746145a1488501bf649c854-merged.mount: Deactivated successfully.
Dec 2 04:36:14 localhost podman[238075]: 2025-12-02 09:36:14.778148683 +0000 UTC m=+0.060427095 container cleanup 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 04:36:14 localhost podman[238075]: ceilometer_agent_compute
Dec 2 04:36:14 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 2 04:36:14 localhost systemd[1]: Stopped ceilometer_agent_compute container.
Dec 2 04:36:14 localhost systemd[1]: Starting ceilometer_agent_compute container...
Dec 2 04:36:14 localhost systemd[1]: Started libcrun container.
Dec 2 04:36:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ddd6aaec82e09d3b6ef171e2ac941eb72bf7461746145a1488501bf649c854/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 2 04:36:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56ddd6aaec82e09d3b6ef171e2ac941eb72bf7461746145a1488501bf649c854/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 2 04:36:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:36:14 localhost podman[238086]: 2025-12-02 09:36:14.970015586 +0000 UTC m=+0.143859090 container init 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 04:36:14 localhost ceilometer_agent_compute[238101]: + sudo -E kolla_set_configs Dec 2 04:36:14 localhost ceilometer_agent_compute[238101]: sudo: unable to send audit 
message: Operation not permitted Dec 2 04:36:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:36:15 localhost podman[238086]: 2025-12-02 09:36:15.015752442 +0000 UTC m=+0.189595956 container start 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, 
tcib_managed=true) Dec 2 04:36:15 localhost podman[238086]: ceilometer_agent_compute Dec 2 04:36:15 localhost systemd[1]: Started ceilometer_agent_compute container. Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Validating config file Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Copying service configuration files Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Deleting 
/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: INFO:__main__:Writing out command to execute Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: ++ cat /run_command Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: + ARGS= Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: + sudo kolla_copy_cacerts Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: sudo: unable to send audit message: Operation not permitted Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: + [[ ! -n '' ]] Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: + . 
kolla_extend_start Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: + umask 0022 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Dec 2 04:36:15 localhost podman[238110]: 2025-12-02 09:36:15.108733963 +0000 UTC m=+0.092252513 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 04:36:15 localhost podman[238110]: 2025-12-02 09:36:15.115877316 +0000 UTC m=+0.099395856 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=edpm) Dec 2 04:36:15 localhost podman[238110]: unhealthy Dec 2 04:36:15 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Main process exited, code=exited, status=1/FAILURE Dec 2 04:36:15 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Failed with result 'exit-code'. Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.871 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.872 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 
'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.873 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG 
cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.874 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue 
[-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.875 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.876 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.877 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG 
cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.878 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.879 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.880 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.881 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.882 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:15.883 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.884 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.885 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.903 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.904 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. 
Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.905 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Dec 2 04:36:15 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:15.922 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 2 04:36:15 localhost python3.9[238240]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:36:16 localhost nova_compute[230637]: 2025-12-02 09:36:16.005 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.066 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.066 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.066 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.066 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.066 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.066 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] debug = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.067 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.068 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 2 
04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.069 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.070 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.071 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.072 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.073 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.073 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.073 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.073 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.073 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.073 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.073 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.073 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.074 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.076 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.077 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.078 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.079 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.080 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.081 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.082 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.083 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-]
oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.086 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.086 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.088 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.093 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 2 04:36:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11275 DF PROTO=TCP SPT=60532 DPT=9100 SEQ=1810058814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479B0CA40000000001030307) Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.438 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a966654efc63eb79f395da865ed495916856f318e31034e86d5a2b1abae24291" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 2 04:36:16 localhost python3.9[238334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668175.5398223-1545-119609008531271/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.504 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 327 Content-Type: application/json Date: Tue, 02 Dec 2025 09:36:16 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a91009ba-9531-4b7d-9b7d-290113c0ab02 x-openstack-request-id: req-a91009ba-9531-4b7d-9b7d-290113c0ab02 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.505 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "45a99238-6f19-4f9e-be82-6ef3af1dcb31", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}, {"rel": 
"bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.505 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-a91009ba-9531-4b7d-9b7d-290113c0ab02 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.507 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a966654efc63eb79f395da865ed495916856f318e31034e86d5a2b1abae24291" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.523 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 494 Content-Type: application/json Date: Tue, 02 Dec 2025 09:36:16 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-10f44dac-5fc2-4dc2-8cc6-106b849fc591 x-openstack-request-id: req-10f44dac-5fc2-4dc2-8cc6-106b849fc591 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.523 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "45a99238-6f19-4f9e-be82-6ef3af1dcb31", "name": "m1.small", "ram": 512, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, 
"os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.523 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31 used request id req-10f44dac-5fc2-4dc2-8cc6-106b849fc591 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.525 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.525 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.535 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 
1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.536 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd757fbc4-12ed-4a4d-bf88-8d8859e50ae3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.525935', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '599801d4-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.745062044, 'message_signature': 
'bed582c53f4d509a3545d68d326cd2b2e8fe67c20e369c64d2fbc271cd81e1e2'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.525935', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59981494-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.745062044, 'message_signature': 'd96303b3ad2e29ec426d0efac193f11597cfc3f63bc12b1e23e69e3f43473fc6'}]}, 'timestamp': '2025-12-02 09:36:16.537113', '_unique_id': '1cbafc776a9f4843a5f1632dacc7af3b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 
12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", 
line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.543 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.546 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.550 12 DEBUG ceilometer.compute.virt.libvirt.inspector 
[-] No delta meter predecessor for b254bb7f-2891-4b37-9c44-9700e301ce16 / tap4a318f6a-b3 inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.550 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '01d043d8-e05a-467d-b9bf-df3a9057bb3e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.546734', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 
'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '599a3760-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': 'a237c727c7c22967c6aa24e0e067006572cc5bbce638e8d4a7ddac3345fd6c91'}]}, 'timestamp': '2025-12-02 09:36:16.551094', '_unique_id': '7b592cf786564f79b160d36147a6484c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:16.552 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.552 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.553 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.553 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3b995e11-9850-4a4b-98ea-394ec3717d0b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.553104', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '599a93c2-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': '6fe7a141641470b99c91595c27568eee954cb160054cf49eb15a69cb5a573865'}]}, 'timestamp': '2025-12-02 09:36:16.553420', '_unique_id': 'df668e9f63ad49e59ff5b4ed2046fc67'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.554 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.554 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.555 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fd9818b7-68c5-4da0-a17c-2e3ed5f2ac88', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.555032', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '599adeea-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': '0a58079191405013352b35ddfe0ab5b52d1d62d29101de9dafd079eafd963274'}]}, 'timestamp': '2025-12-02 09:36:16.555403', '_unique_id': '6bc246a47bb047d98e19cba110eb3964'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:16.556 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.556 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.596 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.597 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dc57f9fd-63ec-4ffe-9ba1-d3c4614376ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.556966', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59a1388a-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': '9d4779421b4631cbaa8ef2623222eb11fea5694bf7e39c376d446ce94e9acd46'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.556966', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59a15248-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': '340fbaa0477497b16abaad2eed26998eacbdc4504471ce9a2145d27b2e3def9b'}]}, 'timestamp': '2025-12-02 09:36:16.597882', '_unique_id': '04b014c0b5d14487bddc4154d1a689d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.599 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.601 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.601 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.601 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.602 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of 
pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.602 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 524 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.603 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e1b9520b-87b1-4b2f-9406-1fc804124e93', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 524, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.602656', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 
'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59a22858-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': '349a810e20c2b952b2d1e35e45699bcfb7850bf418b1a68820485d335deef1ed'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.602656', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59a239ce-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': 'db0e48a93a4b4802e65fffbd03c55b6ab5f46d80828556f18316925796c73bf7'}]}, 'timestamp': '2025-12-02 09:36:16.603604', '_unique_id': 'edb6e89cf5bb4cb58a7f2c11d27f1d73'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.604 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.605 12 INFO ceilometer.polling.manager [-] Polling 
pollster cpu in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.624 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 51630000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c9c0efdc-1fad-4513-bd7c-762b522f84f5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 51630000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:36:16.606088', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '59a5891c-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.843092954, 'message_signature': '4350d92703403e247b96d5de17ca595049ffd5e8c83e1de2a87c359308612944'}]}, 'timestamp': '2025-12-02 
09:36:16.625473', '_unique_id': '571b5cede90148b5a0d85baf820a6506'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 
2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:16.627 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging with 
self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR 
oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.627 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.629 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.629 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 82 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd9075fc8-ab28-4a55-88f9-03c49667c62c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 82, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.629371', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '59a643fc-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': 'fd97341876c6905ef526b0b2bed4a6af99c155f56bcf43eb305c955708f2365e'}]}, 'timestamp': '2025-12-02 09:36:16.630127', '_unique_id': 'f49bdaa8d2a94caa980e6f1ba04d7ec3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 
04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.631 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.632 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.632 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd54f9db4-c351-4581-9e81-edb078da56b6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.632605', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '59a6bb20-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': '6d124f840b12f9d88048393a5940d916b5ab5d3f72a379c721bbfae7311e82c5'}]}, 'timestamp': '2025-12-02 09:36:16.633161', '_unique_id': '26f4b7e6e9144c648cc87398c9b11bb3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.634 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.635 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.635 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1433516318 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.636 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 164656012 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a025a389-96f2-4115-ba47-c85903074065', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1433516318, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.635694', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59a7323a-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': 'dd31c0fc360e0ad3155c824232da4ad307abd625c306eef33057c899166818cd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 164656012, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.635694', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59a74388-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': 'a50d9d1b5ad6581cb7b0590537a4df4bd8ca60c164d254d1e44825a75e7ecf59'}]}, 'timestamp': '2025-12-02 09:36:16.636649', '_unique_id': '3a53ee5a26e84252801efbe61b53d789'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:36:16.637 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.637 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.638 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.639 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9b903ee4-3e51-4c30-9167-c7c86cc3bb5f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.639074', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '59a7b566-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': 'c9ea90561b87630844b572c36accd676716f4003eeab2000cc09fcabbb85450a'}]}, 'timestamp': '2025-12-02 09:36:16.639560', '_unique_id': 'c03f582f6a054bb6b5a429f26fcd8be0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.640 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.641 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.642 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 286697561 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.642 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 39228582 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8f4a882b-224e-4367-b94a-c77bdc725cc5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 286697561, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.642030', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59a8287a-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': '603866ff0da33aed7f0b0870302643506b7c818fc1f806ddccc8f066ff50d992'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 39228582, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.642030', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59a83acc-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': '3da80195bc834d3c0aea18dafea103fe5b4f8275436c72f43562d381528bce9d'}]}, 'timestamp': '2025-12-02 09:36:16.642965', '_unique_id': '6d83e53a506b41d99eac096a3f9de38f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.643 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.645 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.645 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '82168af6-5fd3-40dd-b37c-4dc9f8c552f8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.645297', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '59a8a84a-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': 'e975020149ed277add29508d6163b1a80f0435197f2ee6dae28ad54641e438cd'}]}, 'timestamp': '2025-12-02 09:36:16.645839', '_unique_id': '1d825a3d67f34e9880c919b1fa94b49a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.646 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.647 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.648 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.648 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c6ffa6c8-6402-41b5-892f-4756eaa43236', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.648066', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59a9142e-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.745062044, 'message_signature': 'b5132f9a36502d092975ab7e72e67de563b4992a10b554abebce9f0537a2cf13'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.648066', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59a92752-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.745062044, 'message_signature': '41761297292e5053ea8d48582e8b029e1f6a38ec4b81fefd8bfbee1129cd911d'}]}, 'timestamp': '2025-12-02 09:36:16.649084', '_unique_id': '2174d1fd65fb40bfb6383c964a5510c1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.650 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.652 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.652 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4e743d1f-7f0e-4eb5-b4da-395f394e2806', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.652401', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '59a9cb62-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': '1957cae905b96d1039b68a5ac6c341280872cb3d6e388ee230c19cc4a9969387'}]}, 'timestamp': '2025-12-02 09:36:16.653340', '_unique_id': '11692eb1f4464711a608da4a0b6ffb87'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.654 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.655 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.655 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.655 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b52649ca-9581-465f-a735-ca30ac2e70aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.655553', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59aa385e-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.745062044, 'message_signature': 'c53ca9ec4336f02ced723e7af085ffdcd2f7a7616a9fe7c58a7379f45963affd'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.655553', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59aa461e-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.745062044, 'message_signature': '590606e95ec970d5aef66a896bf238383804dfe089de7df26249277e1fd2426d'}]}, 'timestamp': '2025-12-02 09:36:16.656264', '_unique_id': 'a6a2328f2ee14eb486a602433e7de811'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.656 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.657 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.657 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 73900032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.658 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '4f5dbcaf-01d6-493f-bfa7-118537f28aa3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73900032, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.657813', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59aa8dfe-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': 'f92b38957eda90482edfdb49ae8a5c67f611225744ea2cf89aa2f5062d7a22ea'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.657813', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59aa9894-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': '9e0e18eb09947a89783e80b448effd48a3a621d5ab2e7ef754244afc13ec8c43'}]}, 'timestamp': '2025-12-02 09:36:16.658373', '_unique_id': '1415e9215b9f4cc5bceeb3345572131c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.659 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.660 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.660 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 52.35546875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b6461f87-e100-46b4-9328-53468585c35b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.35546875, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:36:16.660209', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '59aaeb82-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.843092954, 'message_signature': '68081e69af8915757b98677613389d88a7bd48600d5ab65cf72f90c6accfe756'}]}, 'timestamp': '2025-12-02 09:36:16.660505', '_unique_id': 'c903b63704694042b6a77e25276aac6c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.661 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 8783 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5db2c948-44a7-46e4-bb56-3639177f15a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 8783, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.661975', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '59ab3042-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': '49bc20e93582777eceb8b271e7485a93c6076915a3ac81b87db28dfcbbd12450'}]}, 'timestamp': '2025-12-02 09:36:16.662274', '_unique_id': '2be1aaabee064a87bc81724c1311cc67'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.662 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.663 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.663 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.663 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.663 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.664 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.664 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.664 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.664 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.664 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a427cb59-f031-4528-b5d4-e9258ad277b2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:36:16.664401', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '59ab8f1a-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': '796a67ec424b7516d2beba3b2ebb1c65b1a95f170c6b7f87025abc44ba56b42c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:36:16.664401', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '59ab9b90-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.776104782, 'message_signature': 'b9d96f045d45b495c0e5376544d8cb5a1645ead5d211dc3477d27b0e9192ef84'}]}, 'timestamp': '2025-12-02 09:36:16.665008', '_unique_id': 'ce1a566e08314be9800173c7b820b918'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.665 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.666 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.666 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6b3ed643-4ee2-4b4c-8dce-407038f5030d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:36:16.666663', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '59abe852-cf62-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10338.76591754, 'message_signature': '82aa8c0d0998c010644d7dbcebd29da248c372507fe54f3408a561128d69578e'}]}, 'timestamp': '2025-12-02 09:36:16.666994', '_unique_id': 'd4308881449c4f619cd8fd316fd816a9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:36:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:36:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:36:16.667 12 ERROR oslo_messaging.notify.messaging Dec 2 04:36:17 localhost python3.9[238444]: 
ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False Dec 2 04:36:18 localhost python3.9[238554]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:36:18 localhost nova_compute[230637]: 2025-12-02 09:36:18.646 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:19 localhost python3[238664]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:36:19 localhost podman[238701]: Dec 2 04:36:19 localhost podman[238701]: 2025-12-02 09:36:19.326311563 +0000 UTC m=+0.073070009 container create 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors , config_id=edpm) Dec 2 04:36:19 localhost podman[238701]: 2025-12-02 09:36:19.289553968 +0000 UTC m=+0.036312414 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Dec 2 04:36:19 localhost python3[238664]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald 
--log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl Dec 2 04:36:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27097 DF PROTO=TCP SPT=53614 DPT=9102 SEQ=3585547451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479B19E50000000001030307) Dec 2 04:36:20 localhost python3.9[238849]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:36:21 localhost nova_compute[230637]: 2025-12-02 09:36:21.009 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:21 localhost python3.9[238972]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Dec 2 04:36:22 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16874 DF PROTO=TCP SPT=51228 DPT=9101 SEQ=2887775470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479B24500000000001030307) Dec 2 04:36:22 localhost python3.9[239127]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668181.720109-1704-58902281780546/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:36:22 localhost python3.9[239194]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:36:22 localhost systemd[1]: Reloading. Dec 2 04:36:22 localhost systemd-rc-local-generator[239220]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:22 localhost systemd-sysv-generator[239223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:36:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost nova_compute[230637]: 2025-12-02 09:36:23.661 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:36:23 localhost python3.9[239303]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:36:23 localhost systemd[1]: Reloading. Dec 2 04:36:23 localhost systemd-sysv-generator[239328]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:36:23 localhost systemd-rc-local-generator[239324]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:36:24 localhost systemd[1]: Starting node_exporter container... Dec 2 04:36:24 localhost systemd[1]: Started libcrun container. Dec 2 04:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 04:36:24 localhost podman[239344]: 2025-12-02 09:36:24.391330584 +0000 UTC m=+0.154455923 container init 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.407Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.407Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Dec 2 04:36:24 localhost node_exporter[239357]: 
ts=2025-12-02T09:36:24.407Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.408Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.408Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.408Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.408Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.408Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.408Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Dec 2 04:36:24 localhost 
node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=arp Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=bcache Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=bonding Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=btrfs Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=conntrack Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=cpu Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=cpufreq Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=diskstats Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=edac Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=fibrechannel Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=filefd Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=filesystem Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=infiniband Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=ipvs Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=loadavg Dec 2 04:36:24 localhost 
node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=mdadm Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=meminfo Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=netclass Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=netdev Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=netstat Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=nfs Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=nfsd Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=nvme Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=schedstat Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=sockstat Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=softnet Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=systemd Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=tapestats Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=udp_queues Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=vmstat Dec 2 04:36:24 localhost 
node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=xfs Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.409Z caller=node_exporter.go:117 level=info collector=zfs Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.410Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Dec 2 04:36:24 localhost node_exporter[239357]: ts=2025-12-02T09:36:24.410Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Dec 2 04:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:36:24 localhost podman[239344]: 2025-12-02 09:36:24.421808747 +0000 UTC m=+0.184934106 container start 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 
'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:36:24 localhost podman[239344]: node_exporter Dec 2 04:36:24 localhost systemd[1]: Started node_exporter container. Dec 2 04:36:24 localhost podman[239366]: 2025-12-02 09:36:24.494474886 +0000 UTC m=+0.070110164 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:36:24 localhost podman[239366]: 2025-12-02 09:36:24.508042125 +0000 UTC m=+0.083677383 container exec_died 
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:36:24 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 04:36:25 localhost python3.9[239498]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:36:25 localhost systemd[1]: Stopping node_exporter container... Dec 2 04:36:25 localhost systemd[1]: libpod-89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.scope: Deactivated successfully. 
Dec 2 04:36:25 localhost podman[239502]: 2025-12-02 09:36:25.269014901 +0000 UTC m=+0.078896820 container died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:36:25 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.timer: Deactivated successfully. Dec 2 04:36:25 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 04:36:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16876 DF PROTO=TCP SPT=51228 DPT=9101 SEQ=2887775470 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A479B30640000000001030307) Dec 2 04:36:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e-userdata-shm.mount: Deactivated successfully. Dec 2 04:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-dfdf2ab7fe5ce6537ec1c19b07cda773ff79d984ef31b505b40a5e19ca784be0-merged.mount: Deactivated successfully. Dec 2 04:36:25 localhost podman[239502]: 2025-12-02 09:36:25.371736761 +0000 UTC m=+0.181618680 container cleanup 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:36:25 localhost podman[239502]: node_exporter Dec 2 04:36:25 localhost systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Dec 2 04:36:25 localhost podman[239531]: 2025-12-02 09:36:25.472956374 +0000 UTC m=+0.066658504 container cleanup 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:36:25 localhost podman[239531]: node_exporter Dec 2 04:36:25 localhost systemd[1]: 
edpm_node_exporter.service: Failed with result 'exit-code'. Dec 2 04:36:25 localhost systemd[1]: Stopped node_exporter container. Dec 2 04:36:25 localhost systemd[1]: Starting node_exporter container... Dec 2 04:36:25 localhost systemd[1]: Started libcrun container. Dec 2 04:36:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:36:25 localhost podman[239544]: 2025-12-02 09:36:25.599346034 +0000 UTC m=+0.102094366 container init 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:36:25 localhost node_exporter[239558]: ts=2025-12-02T09:36:25.611Z 
caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Dec 2 04:36:25 localhost node_exporter[239558]: ts=2025-12-02T09:36:25.611Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Dec 2 04:36:25 localhost node_exporter[239558]: ts=2025-12-02T09:36:25.611Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Dec 2 04:36:25 localhost node_exporter[239558]: ts=2025-12-02T09:36:25.611Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Dec 2 04:36:25 localhost node_exporter[239558]: ts=2025-12-02T09:36:25.611Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Dec 2 04:36:25 localhost node_exporter[239558]: ts=2025-12-02T09:36:25.612Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Dec 2 04:43:11 localhost nova_compute[230637]: 2025-12-02 09:43:11.027 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:43:11 localhost rsyslogd[754]: imjournal: 6666 messages lost due to rate-limiting (20000 
allowed within 600 seconds) Dec 2 04:43:11 localhost nova_compute[230637]: 2025-12-02 09:43:11.034 230641 DEBUG nova.compute.provider_tree [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:43:11 localhost nova_compute[230637]: 2025-12-02 09:43:11.052 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:43:11 localhost nova_compute[230637]: 2025-12-02 09:43:11.055 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:43:11 localhost nova_compute[230637]: 2025-12-02 09:43:11.055 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:43:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39177 DF PROTO=TCP SPT=37708 DPT=9102 SEQ=1820381988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A161650000000001030307) Dec 2 04:43:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:43:12 localhost systemd[1]: tmp-crun.T5NsoB.mount: Deactivated successfully. Dec 2 04:43:12 localhost podman[256879]: 2025-12-02 09:43:12.458426984 +0000 UTC m=+0.096250970 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:43:12 localhost podman[256879]: 2025-12-02 09:43:12.466004238 +0000 UTC m=+0.103828244 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 04:43:12 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.057 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.058 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.058 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.058 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.119 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock 
"refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.119 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.120 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.120 230641 DEBUG nova.objects.instance [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.275 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.460 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.658 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.659 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.660 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.660 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task 
ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.661 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:13 localhost nova_compute[230637]: 2025-12-02 09:43:13.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:14 localhost python3.9[257006]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 2 04:43:15 localhost nova_compute[230637]: 2025-12-02 09:43:15.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:15 localhost nova_compute[230637]: 2025-12-02 09:43:15.776 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:16 localhost python3.9[257119]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None 
selevel=None attributes=None Dec 2 04:43:16 localhost nova_compute[230637]: 2025-12-02 09:43:16.718 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:43:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:43:17 localhost podman[257230]: 2025-12-02 09:43:17.148537096 +0000 UTC m=+0.089369264 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, config_id=edpm, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Dec 2 04:43:17 localhost podman[257230]: 2025-12-02 09:43:17.167207239 +0000 UTC m=+0.108039437 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 04:43:17 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 04:43:17 localhost python3.9[257229]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:17 localhost python3.9[257359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:18 localhost nova_compute[230637]: 2025-12-02 09:43:18.317 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:18 localhost python3.9[257469]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:19 localhost python3.9[257579]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39178 DF PROTO=TCP SPT=37708 DPT=9102 SEQ=1820381988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A181E40000000001030307) Dec 2 04:43:19 localhost python3.9[257689]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:43:20 localhost systemd[1]: tmp-crun.2RDqvo.mount: Deactivated successfully. 
Dec 2 04:43:20 localhost podman[257799]: 2025-12-02 09:43:20.177875671 +0000 UTC m=+0.097299678 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:43:20 localhost podman[257799]: 2025-12-02 09:43:20.194069548 +0000 UTC m=+0.113493615 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:43:20 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:43:20 localhost python3.9[257800]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:20 localhost nova_compute[230637]: 2025-12-02 09:43:20.779 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:21 localhost python3.9[257930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:21 localhost python3.9[258018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668600.5928473-279-193908501822782/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:22 localhost python3.9[258126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:22 localhost python3.9[258212]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668602.0470908-324-45132693326891/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:23 localhost nova_compute[230637]: 2025-12-02 09:43:23.321 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:23 localhost python3.9[258320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:24 localhost python3.9[258406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668603.1381304-324-133461508350307/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:24 localhost python3.9[258514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:25 localhost python3.9[258600]: 
ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668604.2490718-324-213312502282350/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=694f6cc59ea78cd881696e3f3cdb6845e6a84456 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:25 localhost nova_compute[230637]: 2025-12-02 09:43:25.782 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:26 localhost python3.9[258708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:26 localhost python3.9[258794]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668605.9838195-498-258976508411/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=d6e803f833d8b5f768d3a3c0112defa742aeec55 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:43:27 localhost podman[258901]: 2025-12-02 09:43:27.441934685 +0000 UTC m=+0.077612367 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:43:27 localhost podman[258901]: 2025-12-02 09:43:27.45584664 +0000 UTC m=+0.091524322 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:43:27 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:43:27 localhost python3.9[258903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:28 localhost python3.9[259009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668607.1376557-543-82289181862901/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:28 localhost nova_compute[230637]: 2025-12-02 09:43:28.368 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:28 localhost python3.9[259117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:29 localhost python3.9[259203]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668608.1425648-543-6165996118472/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:29 localhost python3.9[259311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill 
follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:43:29 localhost systemd[1]: tmp-crun.IsRqRE.mount: Deactivated successfully. Dec 2 04:43:29 localhost podman[259312]: 2025-12-02 09:43:29.968742462 +0000 UTC m=+0.093413463 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:43:29 localhost podman[259312]: 2025-12-02 09:43:29.980930472 +0000 UTC m=+0.105601513 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:43:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:43:29 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:43:30 localhost podman[259353]: 2025-12-02 09:43:30.085850614 +0000 UTC m=+0.079430495 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true) Dec 2 04:43:30 localhost podman[259353]: 2025-12-02 09:43:30.122090522 +0000 UTC m=+0.115670403 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:43:30 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:43:30 localhost python3.9[259405]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:30 localhost nova_compute[230637]: 2025-12-02 09:43:30.784 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:30 localhost python3.9[259522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:31 localhost python3.9[259608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668610.4084315-630-163921474061312/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:31 localhost python3.9[259716]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:43:32 localhost python3.9[259828]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:33 localhost python3.9[259938]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:33 localhost nova_compute[230637]: 2025-12-02 09:43:33.371 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:33 localhost python3.9[259995]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41767 DF PROTO=TCP SPT=34180 DPT=9102 SEQ=2874539780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A1BAAD0000000001030307) Dec 2 04:43:34 localhost openstack_network_exporter[242845]: ERROR 09:43:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:43:34 localhost openstack_network_exporter[242845]: ERROR 09:43:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 
04:43:34 localhost openstack_network_exporter[242845]: ERROR 09:43:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:43:34 localhost openstack_network_exporter[242845]: ERROR 09:43:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:43:34 localhost openstack_network_exporter[242845]: Dec 2 04:43:34 localhost openstack_network_exporter[242845]: ERROR 09:43:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:43:34 localhost openstack_network_exporter[242845]: Dec 2 04:43:34 localhost python3.9[260106]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:34 localhost python3.9[260163]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41768 DF PROTO=TCP SPT=34180 DPT=9102 SEQ=2874539780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A1BEA40000000001030307) Dec 2 04:43:35 localhost python3.9[260273]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39179 DF PROTO=TCP SPT=37708 DPT=9102 SEQ=1820381988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A1C1E40000000001030307) Dec 2 04:43:35 localhost nova_compute[230637]: 2025-12-02 09:43:35.809 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:36 localhost podman[240799]: time="2025-12-02T09:43:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:43:36 localhost podman[240799]: @ - - [02/Dec/2025:09:43:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146520 "" "Go-http-client/1.1" Dec 2 04:43:36 localhost python3.9[260383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:36 localhost podman[240799]: @ - - [02/Dec/2025:09:43:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16781 "" "Go-http-client/1.1" Dec 2 04:43:36 localhost python3.9[260440]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41769 DF PROTO=TCP SPT=34180 DPT=9102 SEQ=2874539780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A1C6A40000000001030307) Dec 2 04:43:37 localhost python3.9[260550]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:37 localhost python3.9[260607]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59705 DF PROTO=TCP SPT=43938 DPT=9102 SEQ=553238560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A1C9E40000000001030307) Dec 2 04:43:38 localhost python3.9[260717]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:43:38 localhost systemd[1]: Reloading. 
Dec 2 04:43:38 localhost nova_compute[230637]: 2025-12-02 09:43:38.399 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:38 localhost systemd-rc-local-generator[260743]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:43:38 localhost systemd-sysv-generator[260747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:39 localhost python3.9[260865]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:43:39 localhost systemd[1]: tmp-crun.YdzhaG.mount: Deactivated successfully. 
Dec 2 04:43:39 localhost podman[260923]: 2025-12-02 09:43:39.84649816 +0000 UTC m=+0.109835076 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:43:39 localhost podman[260923]: 2025-12-02 09:43:39.883836629 +0000 UTC m=+0.147173565 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 04:43:39 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:43:39 localhost python3.9[260922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:40 localhost python3.9[261049]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:40 localhost nova_compute[230637]: 2025-12-02 09:43:40.810 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41770 DF PROTO=TCP SPT=34180 DPT=9102 SEQ=2874539780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A1D6640000000001030307) Dec 2 04:43:41 localhost python3.9[261106]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:41 localhost python3.9[261216]: ansible-ansible.builtin.systemd Invoked with 
daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:43:41 localhost systemd[1]: Reloading. Dec 2 04:43:41 localhost systemd-sysv-generator[261242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:43:41 localhost systemd-rc-local-generator[261239]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:42 localhost systemd[1]: Starting Create netns directory... 
Dec 2 04:43:42 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 04:43:42 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:43:42 localhost systemd[1]: Finished Create netns directory. Dec 2 04:43:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:43:43 localhost podman[261367]: 2025-12-02 09:43:43.013107231 +0000 UTC m=+0.076546598 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 2 04:43:43 localhost podman[261367]: 2025-12-02 09:43:43.047091749 +0000 UTC m=+0.110531086 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:43:43 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:43:43 localhost python3.9[261368]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:43:43 localhost nova_compute[230637]: 2025-12-02 09:43:43.402 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:43 localhost python3.9[261495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:43:44 localhost python3.9[261583]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668623.354793-1074-100794677055274/.source.json _original_basename=.ad3i5_vm follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:45 localhost python3.9[261693]: ansible-ansible.builtin.file Invoked with mode=0755 
path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:45 localhost nova_compute[230637]: 2025-12-02 09:43:45.852 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:43:47 localhost podman[262002]: 2025-12-02 09:43:47.311007365 +0000 UTC m=+0.092556820 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41) Dec 2 04:43:47 localhost podman[262002]: 2025-12-02 09:43:47.350788899 +0000 UTC m=+0.132338274 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, 
maintainer=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, distribution-scope=public) Dec 2 04:43:47 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:43:47 localhost python3.9[262001]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Dec 2 04:43:48 localhost python3.9[262131]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:43:48 localhost nova_compute[230637]: 2025-12-02 09:43:48.441 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41771 DF PROTO=TCP SPT=34180 DPT=9102 SEQ=2874539780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A1F5E40000000001030307) Dec 2 04:43:49 localhost python3.9[262241]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 2 04:43:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:43:50 localhost systemd[1]: tmp-crun.RR2Xpp.mount: Deactivated successfully. 
Dec 2 04:43:50 localhost podman[262340]: 2025-12-02 09:43:50.471220853 +0000 UTC m=+0.099887928 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:43:50 localhost podman[262340]: 2025-12-02 09:43:50.478367916 +0000 UTC m=+0.107035001 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:43:50 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:43:50 localhost nova_compute[230637]: 2025-12-02 09:43:50.854 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:53 localhost nova_compute[230637]: 2025-12-02 09:43:53.476 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:53 localhost python3[262487]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:43:54 localhost podman[262527]: Dec 2 04:43:54 localhost podman[262527]: 2025-12-02 09:43:54.027543367 +0000 UTC m=+0.084088411 container create 12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, config_id=neutron_dhcp, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 04:43:54 localhost podman[262527]: 2025-12-02 09:43:53.982137012 +0000 UTC m=+0.038682076 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 04:43:54 localhost python3[262487]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 04:43:54 localhost python3.9[262673]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:43:55 localhost python3.9[262785]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:55 localhost nova_compute[230637]: 2025-12-02 09:43:55.898 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:56 localhost python3.9[262840]: ansible-stat Invoked with 
path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:43:56 localhost python3.9[262949]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668636.0603197-1338-56051706117139/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:43:57 localhost python3.9[263004]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:43:57 localhost systemd[1]: Reloading. Dec 2 04:43:57 localhost systemd-rc-local-generator[263030]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:43:57 localhost systemd-sysv-generator[263033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:43:57 localhost podman[263041]: 2025-12-02 09:43:57.678027512 +0000 UTC m=+0.084610395 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:43:57 localhost podman[263041]: 2025-12-02 09:43:57.696066639 +0000 UTC m=+0.102649522 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3) Dec 2 04:43:57 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:43:58 localhost python3.9[263114]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:43:58 localhost systemd[1]: Reloading. Dec 2 04:43:58 localhost systemd-rc-local-generator[263143]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:43:58 localhost systemd-sysv-generator[263147]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:43:58 localhost nova_compute[230637]: 2025-12-02 09:43:58.506 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:43:58 localhost systemd[1]: Starting neutron_dhcp_agent container... Dec 2 04:43:58 localhost systemd[1]: Started libcrun container. Dec 2 04:43:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684a301db60ff2842f4bca90ed755fa7238e90727cf60535fb02a8b75505de20/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 2 04:43:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684a301db60ff2842f4bca90ed755fa7238e90727cf60535fb02a8b75505de20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:43:58 localhost podman[263155]: 2025-12-02 09:43:58.776421597 +0000 UTC m=+0.149982321 container init 12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:43:58 localhost podman[263155]: 2025-12-02 09:43:58.788086632 +0000 UTC m=+0.161647356 container start 12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=neutron_dhcp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 04:43:58 localhost podman[263155]: neutron_dhcp_agent Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + sudo -E kolla_set_configs Dec 2 04:43:58 localhost systemd[1]: Started neutron_dhcp_agent container. 
Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Validating config file Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Copying service configuration files Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Writing out command to execute Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Dec 2 
04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06 Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/595e1c9b-709c-41d2-9212-0b18b13291a8.pid.haproxy Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/595e1c9b-709c-41d2-9212-0b18b13291a8.conf Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: ++ cat /run_command Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + CMD=/usr/bin/neutron-dhcp-agent Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + ARGS= Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + sudo kolla_copy_cacerts Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + [[ ! -n '' ]] Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + . 
kolla_extend_start Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: Running command: '/usr/bin/neutron-dhcp-agent' Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + umask 0022 Dec 2 04:43:58 localhost neutron_dhcp_agent[263169]: + exec /usr/bin/neutron-dhcp-agent Dec 2 04:44:00 localhost neutron_dhcp_agent[263169]: 2025-12-02 09:44:00.194 263173 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 2 04:44:00 localhost neutron_dhcp_agent[263169]: 2025-12-02 09:44:00.194 263173 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m Dec 2 04:44:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:44:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:44:00 localhost podman[263293]: 2025-12-02 09:44:00.439501256 +0000 UTC m=+0.079648221 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:44:00 localhost podman[263293]: 2025-12-02 09:44:00.451279214 +0000 UTC m=+0.091426219 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:44:00 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 04:44:00 localhost podman[263294]: 2025-12-02 09:44:00.510365239 +0000 UTC m=+0.148652594 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:44:00 localhost podman[263294]: 2025-12-02 09:44:00.57708454 +0000 UTC m=+0.215371885 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 04:44:00 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:44:00 localhost neutron_dhcp_agent[263169]: 2025-12-02 09:44:00.624 263173 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Dec 2 04:44:00 localhost python3.9[263295]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:44:00 localhost systemd[1]: Stopping neutron_dhcp_agent container... Dec 2 04:44:00 localhost nova_compute[230637]: 2025-12-02 09:44:00.904 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:01 localhost systemd[1]: libpod-12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa.scope: Deactivated successfully. 
Dec 2 04:44:01 localhost systemd[1]: libpod-12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa.scope: Consumed 2.381s CPU time. Dec 2 04:44:01 localhost podman[263346]: 2025-12-02 09:44:01.366426471 +0000 UTC m=+0.607948214 container died 12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 2 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay-684a301db60ff2842f4bca90ed755fa7238e90727cf60535fb02a8b75505de20-merged.mount: 
Deactivated successfully. Dec 2 04:44:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa-userdata-shm.mount: Deactivated successfully. Dec 2 04:44:01 localhost podman[263346]: 2025-12-02 09:44:01.467593732 +0000 UTC m=+0.709115445 container cleanup 12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=neutron_dhcp_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_dhcp, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:44:01 localhost podman[263346]: neutron_dhcp_agent Dec 2 
04:44:01 localhost nova_compute[230637]: 2025-12-02 09:44:01.526 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:44:01.522 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:44:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:44:01.523 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 04:44:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:44:01.525 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:44:01 localhost podman[263384]: error opening file `/run/crun/12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa/status`: No such file or directory Dec 2 04:44:01 localhost podman[263373]: 2025-12-02 09:44:01.581447786 +0000 UTC m=+0.074150263 container cleanup 12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=neutron_dhcp_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:44:01 localhost podman[263373]: neutron_dhcp_agent Dec 2 04:44:01 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully. Dec 2 04:44:01 localhost systemd[1]: Stopped neutron_dhcp_agent container. Dec 2 04:44:01 localhost systemd[1]: Starting neutron_dhcp_agent container... Dec 2 04:44:01 localhost systemd[1]: Started libcrun container. 
Dec 2 04:44:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684a301db60ff2842f4bca90ed755fa7238e90727cf60535fb02a8b75505de20/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 2 04:44:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684a301db60ff2842f4bca90ed755fa7238e90727cf60535fb02a8b75505de20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:44:01 localhost podman[263388]: 2025-12-02 09:44:01.733996444 +0000 UTC m=+0.116686460 container init 12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=neutron_dhcp_agent, config_id=neutron_dhcp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 04:44:01 localhost podman[263388]: 2025-12-02 09:44:01.742221407 +0000 UTC m=+0.124911423 container start 12fb0e38ef14daf14b83aea79b0f1cf6caa8dbcc2691abc83accd331da1c7afa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b920ccd57f1789bca419f0fa8ccb82ff7492e8e96fd044e96be9ed18906b094d'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=neutron_dhcp_agent, config_id=neutron_dhcp) Dec 2 04:44:01 localhost podman[263388]: neutron_dhcp_agent Dec 2 04:44:01 localhost 
neutron_dhcp_agent[263402]: + sudo -E kolla_set_configs Dec 2 04:44:01 localhost systemd[1]: Started neutron_dhcp_agent container. Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Validating config file Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Copying service configuration files Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Writing out command to execute Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 2 04:44:01 
localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06 Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/595e1c9b-709c-41d2-9212-0b18b13291a8.pid.haproxy Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/595e1c9b-709c-41d2-9212-0b18b13291a8.conf Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: ++ cat /run_command Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: + CMD=/usr/bin/neutron-dhcp-agent Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: + ARGS= Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: + sudo kolla_copy_cacerts Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: + [[ ! 
-n '' ]] Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: + . kolla_extend_start Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: Running command: '/usr/bin/neutron-dhcp-agent' Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: + umask 0022 Dec 2 04:44:01 localhost neutron_dhcp_agent[263402]: + exec /usr/bin/neutron-dhcp-agent Dec 2 04:44:02 localhost systemd[1]: session-59.scope: Deactivated successfully. Dec 2 04:44:02 localhost systemd[1]: session-59.scope: Consumed 35.841s CPU time. Dec 2 04:44:02 localhost systemd-logind[757]: Session 59 logged out. Waiting for processes to exit. Dec 2 04:44:02 localhost systemd-logind[757]: Removed session 59. Dec 2 04:44:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:44:03.026 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:44:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:44:03.027 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:44:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:44:03.028 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:44:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 09:44:03.083 263406 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 2 04:44:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 09:44:03.084 
263406 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m Dec 2 04:44:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 09:44:03.494 263406 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Dec 2 04:44:03 localhost nova_compute[230637]: 2025-12-02 09:44:03.549 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 09:44:03.609 263406 INFO neutron.agent.dhcp.agent [None req-0b0597ea-3ae0-4335-aeba-5e8799b0c87f - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 04:44:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 09:44:03.610 263406 INFO neutron.agent.dhcp.agent [None req-0b0597ea-3ae0-4335-aeba-5e8799b0c87f - - - - - -] Synchronizing state complete#033[00m Dec 2 04:44:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 09:44:03.670 263406 INFO neutron.agent.dhcp.agent [None req-0b0597ea-3ae0-4335-aeba-5e8799b0c87f - - - - - -] DHCP agent started#033[00m Dec 2 04:44:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19940 DF PROTO=TCP SPT=51420 DPT=9102 SEQ=20700297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A22FDD0000000001030307) Dec 2 04:44:04 localhost openstack_network_exporter[242845]: ERROR 09:44:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:44:04 localhost openstack_network_exporter[242845]: ERROR 09:44:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:44:04 localhost openstack_network_exporter[242845]: ERROR 09:44:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:44:04 localhost 
openstack_network_exporter[242845]: ERROR 09:44:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:44:04 localhost openstack_network_exporter[242845]: Dec 2 04:44:04 localhost openstack_network_exporter[242845]: ERROR 09:44:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:44:04 localhost openstack_network_exporter[242845]: Dec 2 04:44:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19941 DF PROTO=TCP SPT=51420 DPT=9102 SEQ=20700297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A233E40000000001030307) Dec 2 04:44:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41772 DF PROTO=TCP SPT=34180 DPT=9102 SEQ=2874539780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A235E40000000001030307) Dec 2 04:44:05 localhost nova_compute[230637]: 2025-12-02 09:44:05.904 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:06 localhost podman[240799]: time="2025-12-02T09:44:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:44:06 localhost podman[240799]: @ - - [02/Dec/2025:09:44:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 2 04:44:06 localhost podman[240799]: @ - - [02/Dec/2025:09:44:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17212 "" "Go-http-client/1.1" Dec 2 04:44:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19942 DF PROTO=TCP SPT=51420 DPT=9102 SEQ=20700297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A23BE40000000001030307) Dec 2 04:44:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39180 DF PROTO=TCP SPT=37708 DPT=9102 SEQ=1820381988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A23FE40000000001030307) Dec 2 04:44:08 localhost nova_compute[230637]: 2025-12-02 09:44:08.554 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:09 localhost nova_compute[230637]: 2025-12-02 09:44:09.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:09 localhost nova_compute[230637]: 2025-12-02 09:44:09.740 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:44:09 localhost nova_compute[230637]: 2025-12-02 09:44:09.741 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:44:09 localhost nova_compute[230637]: 2025-12-02 09:44:09.741 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] 
Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:44:09 localhost nova_compute[230637]: 2025-12-02 09:44:09.741 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:44:09 localhost nova_compute[230637]: 2025-12-02 09:44:09.741 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.178 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.243 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.245 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:44:10 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.447 230641 WARNING nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.448 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12137MB free_disk=41.837242126464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", 
"product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.448 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.449 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:44:10 localhost podman[263457]: 2025-12-02 09:44:10.462769724 +0000 UTC m=+0.102064566 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:44:10 localhost podman[263457]: 2025-12-02 09:44:10.468847368 +0000 UTC m=+0.108142200 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 
'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Dec 2 04:44:10 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.511 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.511 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.511 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.552 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.908 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.967 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:44:10 localhost nova_compute[230637]: 2025-12-02 09:44:10.973 230641 DEBUG nova.compute.provider_tree [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed in 
ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:44:11 localhost nova_compute[230637]: 2025-12-02 09:44:11.001 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:44:11 localhost nova_compute[230637]: 2025-12-02 09:44:11.003 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:44:11 localhost nova_compute[230637]: 2025-12-02 09:44:11.003 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:44:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19943 DF PROTO=TCP SPT=51420 DPT=9102 SEQ=20700297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A24BA50000000001030307) Dec 2 04:44:13 localhost 
nova_compute[230637]: 2025-12-02 09:44:13.003 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.004 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.004 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.093 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.094 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.094 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.094 230641 DEBUG 
nova.objects.instance [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:44:13 localhost systemd[1]: tmp-crun.J2qoNh.mount: Deactivated successfully. Dec 2 04:44:13 localhost podman[263498]: 2025-12-02 09:44:13.452920983 +0000 UTC m=+0.093032644 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS) Dec 2 04:44:13 localhost podman[263498]: 2025-12-02 09:44:13.463086816 +0000 UTC m=+0.103198527 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 04:44:13 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.556 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.658 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": 
{}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.674 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.675 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.675 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.676 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.676 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.676 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.723 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:13 localhost nova_compute[230637]: 2025-12-02 09:44:13.723 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:15 localhost nova_compute[230637]: 2025-12-02 09:44:15.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:44:15 localhost nova_compute[230637]: 2025-12-02 09:44:15.963 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.098 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.099 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.115 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.116 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2432c842-5932-4146-a31e-05b07f1101aa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.099740', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '777205e6-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.31881643, 'message_signature': '1b9e8271716cf249e2872f5ad75188c67b5782f4ca968a6f3923d281ee8e0249'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.099740', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '77721978-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.31881643, 'message_signature': '95ab67a0b65efec6df12b2cfab69e6592a38c9841bda3bd6d86afa903da97c0c'}]}, 'timestamp': '2025-12-02 09:44:16.116587', '_unique_id': '136de8464c364a21930f46e57a74cc64'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.118 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.119 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.122 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 446 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8483e625-5372-4f0b-a905-c6ae24fb4171', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 446, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.119534', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '77731ce2-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': '41f13b4db67739bff79024b73ff7fa9beb36710a367a3bd374af36cff81877ad'}]}, 'timestamp': '2025-12-02 09:44:16.123252', '_unique_id': 'f9deacf16c6f4e03818a0356957295a3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.124 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.125 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.125 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '12e6a044-9164-40e2-8171-cd35479972f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.125555', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '77738bd2-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': '7d8bc1293356b0abcedf54f7f4a1fa95fd73a9710cfbebfcc666742d7ea7a064'}]}, 'timestamp': '2025-12-02 09:44:16.126086', '_unique_id': '24e8b85da7574942978059f4ff3487d6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.128 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.128 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.144 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 52.35546875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5d007820-5df9-4b35-92a5-83a789a278d9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.35546875, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:44:16.128358', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '777672fc-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.363536848, 'message_signature': '5cedcd6a686cd6aed04d1b58d9f0cab48718f09add36beb9bd7f0aa6459156c0'}]}, 'timestamp': '2025-12-02 09:44:16.145100', '_unique_id': '5691d983385048b4842914a313d1d934'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.146 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.147 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:44:16.147 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.148 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b61d48d2-c097-4769-a703-f623a1a2c7df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.147527', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 
'vda'}, 'message_id': '7776e516-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.31881643, 'message_signature': '5693a51062c0e5e2933c3e755718760a54dc49aac7c9cf5ba468bf2fa225fc58'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.147527', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7776f556-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.31881643, 'message_signature': '81d60ea59dc2894884ef5bdd684d29410d469c0cfdc8efc72117c8eefbae8a96'}]}, 'timestamp': '2025-12-02 09:44:16.148407', '_unique_id': '7ac0fd400b8148b0ac59d525abb443ce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.150 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:44:16.184 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 524 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.184 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0d5c9d89-c900-4cb1-a9e2-fe356ce4f5d8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 524, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.150790', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '777c7cc4-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': 'b1f2070c610e5c88959bcaac72825118b352a89f9524e9e5360ed8b7ef15b501'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.150790', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '777c9510-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '298b2b2c26ba56079ea31430e227eef9f033118f4dd3849655c0aec30bab2522'}]}, 'timestamp': '2025-12-02 09:44:16.185467', '_unique_id': '9f2da6c9af96435f97fa84285d5efe38'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.186 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.187 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9065735d-d82f-44b0-9f54-836a4a04b366', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.187986', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '777d109e-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '37f679005b7ffce19c2d92c7b3e7318565cf1a1205e58ea4099fceadb3623eb0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.187986', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '777d2214-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '2c07d1411c3aecfca502b5eb69dadbf13c9bba6c73833f33f54c2e21df8605ca'}]}, 'timestamp': '2025-12-02 09:44:16.188882', '_unique_id': '73bb51af0fe7459fbbecbcfbf47e692b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.189 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.190 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.191 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '391ca724-89e8-4e44-ba53-bc836fa55ee7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.191078', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '777d8970-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': '00dbe912c159072f74650ee890a4697c84e35f4dd7c7ea8876a54d9e4607f93d'}]}, 'timestamp': '2025-12-02 09:44:16.191554', '_unique_id': '8ac9b5a5068f4f79b904a167956a7b9c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.192 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.193 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.193 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 87 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '090b1dc7-c394-4342-affc-95c421cc267e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 87, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.193710', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '777df130-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': 'eb3ba91751cbfa8fec148fd16ffd202768c3268c80a8a0863d3caf7635ff8528'}]}, 'timestamp': '2025-12-02 09:44:16.194212', '_unique_id': '036e09cfd14f4b45984a11bc033e9be8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:44:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.196 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.196 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ec6d958f-5711-4d93-977b-11a493839ddf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.196331', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '777e567a-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': '9470bcb0d6da37bee3951c2fb467fca90644016f96321d9fa8a70f1bb7cbc579'}]}, 'timestamp': '2025-12-02 09:44:16.196836', '_unique_id': 'bf5afa907e424a77840000c8b69e5c1f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.198 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.199 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 73900032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.199 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1294fad9-024a-4c67-bc85-37aad4216b0c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73900032, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.198963', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '777ebd36-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '533ef7164702552a14ae3b4584c4cb0a14e520d3c58374661683380ad2e09684'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.198963', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '777ecec0-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '7c2da2921eadd5ba986f19ab40f372290755770e847f2bfb3a09f27a1439c7a0'}]}, 'timestamp': '2025-12-02 09:44:16.199885', '_unique_id': 'c062fb1b57b94347bc2d554c0368d425'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.200 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.201 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.202 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 56560000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '756313cf-2dfc-44ac-b61d-d47a46ecdde6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 56560000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:44:16.202103', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '777f37f2-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.363536848, 'message_signature': '11d3101e4706702e2b67bae10ccca8661f0e2cbb53ad40017b2934b1d7838bf5'}]}, 'timestamp': '2025-12-02 09:44:16.202561', '_unique_id': '2fbcdb89b4324faf94a4854e18c730bb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.204 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.204 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5aa65c34-b701-433a-a0cf-0042b56577bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.204845', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 
'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '777fa304-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': '25821a683b7a2d6ea3e4987a392fcb968576313bf578f50d71daf16650a0cbce'}]}, 'timestamp': '2025-12-02 09:44:16.205317', '_unique_id': '5036fe0fd7434d189ab2830604007ef9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in 
_connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.206 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.207 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.207 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1433516318 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.207 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 164656012 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '82c8a8db-a895-4fd0-bab0-d61eb18ed9dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1433516318, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.207434', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '778008b2-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '8b76e1af8c41a9a020e5806a1e51f4aa3a901d4d6f1a52ac4cfc652dbde5a0f8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 164656012, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.207434', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '77801974-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '663d6430b311b0a8c9a6b89054cff3b555f356d9d85535951a56b0014fa1575e'}]}, 'timestamp': '2025-12-02 09:44:16.208323', '_unique_id': '30536de629a2486fb971ce5e897d19f2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.209 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.210 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.210 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '65e46cf7-34a8-4389-85f2-582ce73c2b4d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.210529', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '77808256-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': '537ba4267a67e1105bef74dffff1b83ccf2277c063b585633ab3c7c54e40ef9c'}]}, 'timestamp': '2025-12-02 09:44:16.211032', '_unique_id': 'fdf1d771e9dd418ead4eeb660aa09e9f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.211 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.213 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.213 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b04d2f65-a035-4bdd-baa4-2c3baa01a2b8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.213282', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7780ec32-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.31881643, 'message_signature': '1c0112244d50a24af64cd32857673612fad515461e4e943cfb2e51be644b9f1d'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.213282', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7780fee8-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.31881643, 'message_signature': '6752b1337bb796109f71e6906a4664ba0743471779d98135d5fc4ab067ab8960'}]}, 'timestamp': '2025-12-02 09:44:16.214195', '_unique_id': '5e4877e765cc45cf984774c88cf0a3d6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.215 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.216 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.216 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 286697561 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 39228582 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b5893091-afaa-40f8-ac77-e6829a440ec3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 286697561, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.216554', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '77816dc4-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '61c82db9099a949a3f91414b66420d4c071de222c5688181b85169363c93f435'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 39228582, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.216554', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '77817e18-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': 'b54c5f1023e63078a6a743119b634df92130450d1013b6346453e791ef5d29ed'}]}, 'timestamp': '2025-12-02 09:44:16.217490', '_unique_id': '715ba7c4f4fc4ae1b220499d98587431'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.218 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.219 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.219 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dcbff0cd-b5f3-4cca-8c41-2a00fd8f318c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.219829', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '7781ebf0-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': 'fb60635d1936f34389bf3c19ad4fc7deb7e720a5df8ce612ea3743fb02a3b92b'}]}, 'timestamp': '2025-12-02 09:44:16.220287', '_unique_id': '797a3eb0181c4f9c9c8b84aa4a4afaca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.221 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.222 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.222 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 9229 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '08701f33-a737-4317-b647-c73aff6a751b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9229, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.222697', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '77825c8e-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': '792e743f809909c17bacc015a0627c86a3f412c22d1739451d0a432d1ed0a229'}]}, 'timestamp': '2025-12-02 09:44:16.223172', '_unique_id': 'a12984e5c25a462a9b766df921b3154c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.224 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.225 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.225 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2f615a03-7df6-4048-a875-48d5abae1c54', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:44:16.225261', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '7782bff8-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.338659026, 'message_signature': '8ff69b9532e7f25ec78a6ecfaf15e53c823b5e6e963c66e303a7f803bfe385ed'}]}, 'timestamp': '2025-12-02 09:44:16.225798', '_unique_id': '1483f89066014164b63024248f782f14'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.226 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.227 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.227 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.227 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9fd34916-13ad-4d2a-a5e4-b077aac11362', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:44:16.227576', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '778317fa-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': 'dc5f1a0bf6f2878fb088956b109bbfb17deb7e367b22cc476ded38d25476bf2b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:44:16.227576', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '77832204-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10818.369868799, 'message_signature': '5109be235f1cfb44a8da73c49da1725bbe9a4641348d4c8e8ba2bf4f2973682e'}]}, 'timestamp': '2025-12-02 09:44:16.228121', '_unique_id': 'dfe209c04b2c4e4abc739ad66f2b8508'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:44:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:44:16.228 12 ERROR oslo_messaging.notify.messaging Dec 2 04:44:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:44:16.229 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:44:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:44:18 localhost podman[263518]: 2025-12-02 09:44:18.440896134 +0000 UTC m=+0.084417650 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Dec 2 04:44:18 localhost podman[263518]: 2025-12-02 09:44:18.455022686 +0000 UTC m=+0.098544182 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 2 04:44:18 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 04:44:18 localhost nova_compute[230637]: 2025-12-02 09:44:18.561 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19944 DF PROTO=TCP SPT=51420 DPT=9102 SEQ=20700297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A26BE40000000001030307)
Dec 2 04:44:20 localhost nova_compute[230637]: 2025-12-02 09:44:20.966 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 04:44:21 localhost podman[263538]: 2025-12-02 09:44:21.438630295 +0000 UTC m=+0.077376819 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure',
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:44:21 localhost podman[263538]: 2025-12-02 09:44:21.473282211 +0000 UTC m=+0.112028755 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:44:21 localhost systemd[1]:
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 04:44:23 localhost nova_compute[230637]: 2025-12-02 09:44:23.565 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:25 localhost nova_compute[230637]: 2025-12-02 09:44:25.969 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 04:44:28 localhost podman[263561]: 2025-12-02 09:44:28.435487608 +0000 UTC m=+0.072752725 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev',
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:44:28 localhost podman[263561]: 2025-12-02 09:44:28.453992604 +0000 UTC m=+0.091257721 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd,
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 04:44:28 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 04:44:28 localhost nova_compute[230637]: 2025-12-02 09:44:28.570 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:30 localhost nova_compute[230637]: 2025-12-02 09:44:30.972 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 04:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:44:31 localhost podman[263578]: 2025-12-02 09:44:31.454730882 +0000 UTC m=+0.094903140 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:44:31 localhost podman[263578]: 2025-12-02 09:44:31.462927092 +0000 UTC m=+0.103099360 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z',
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:44:31 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 04:44:31 localhost podman[263579]: 2025-12-02 09:44:31.50343467 +0000 UTC m=+0.140901265 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:44:31 localhost podman[263579]: 2025-12-02 09:44:31.541203644 +0000 UTC m=+0.178670179 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:44:31 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:44:31 localhost ovn_controller[154505]: 2025-12-02T09:44:31Z|00047|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 2 04:44:33 localhost nova_compute[230637]: 2025-12-02 09:44:33.573 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51716 DF PROTO=TCP SPT=35230 DPT=9102 SEQ=1998856449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A2A50D0000000001030307)
Dec 2 04:44:34 localhost openstack_network_exporter[242845]: ERROR 09:44:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:44:34 localhost openstack_network_exporter[242845]: ERROR 09:44:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:44:34 localhost openstack_network_exporter[242845]: ERROR 09:44:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:44:34 localhost openstack_network_exporter[242845]: ERROR 09:44:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:44:34 localhost openstack_network_exporter[242845]:
Dec 2 04:44:34 localhost openstack_network_exporter[242845]: ERROR 09:44:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:44:34 localhost openstack_network_exporter[242845]:
Dec 2 04:44:34 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 04:44:34 localhost rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 2 04:44:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51717 DF PROTO=TCP SPT=35230 DPT=9102 SEQ=1998856449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A2A9250000000001030307)
Dec 2 04:44:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19945 DF PROTO=TCP SPT=51420 DPT=9102 SEQ=20700297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A2ABE40000000001030307)
Dec 2 04:44:35 localhost nova_compute[230637]: 2025-12-02 09:44:35.975 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:36 localhost podman[240799]: time="2025-12-02T09:44:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:44:36 localhost podman[240799]: @ - - [02/Dec/2025:09:44:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1"
Dec 2 04:44:36 localhost podman[240799]: @ - - [02/Dec/2025:09:44:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17222 "" "Go-http-client/1.1"
Dec 2 04:44:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51718 DF PROTO=TCP SPT=35230 DPT=9102 SEQ=1998856449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A2B1250000000001030307)
Dec 2 04:44:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41773 DF PROTO=TCP
SPT=34180 DPT=9102 SEQ=2874539780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A2B3E40000000001030307)
Dec 2 04:44:38 localhost nova_compute[230637]: 2025-12-02 09:44:38.610 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:41 localhost nova_compute[230637]: 2025-12-02 09:44:41.012 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51719 DF PROTO=TCP SPT=35230 DPT=9102 SEQ=1998856449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A2C0E40000000001030307)
Dec 2 04:44:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:44:41 localhost podman[263629]: 2025-12-02 09:44:41.445236952 +0000 UTC m=+0.083782221 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 04:44:41 localhost podman[263629]: 2025-12-02 09:44:41.48501205 +0000 UTC m=+0.123557269 container exec_died
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 04:44:41 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 04:44:43 localhost nova_compute[230637]: 2025-12-02 09:44:43.665 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:44:44 localhost podman[263649]: 2025-12-02 09:44:44.44495667 +0000 UTC m=+0.085222541 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z',
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 2 04:44:44 localhost podman[263649]: 2025-12-02 09:44:44.480233927 +0000 UTC m=+0.120499818 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS,
org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 2 04:44:44 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:44:46 localhost nova_compute[230637]: 2025-12-02 09:44:46.058 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:48 localhost nova_compute[230637]: 2025-12-02 09:44:48.668 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:44:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 04:44:49 localhost podman[263666]: 2025-12-02 09:44:49.442584602 +0000 UTC m=+0.078261713 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal',
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, version=9.6) Dec 2 04:44:49 localhost podman[263666]: 2025-12-02 09:44:49.481143427 +0000 UTC m=+0.116820538 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, 
vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=) Dec 2 04:44:49 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:44:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51720 DF PROTO=TCP SPT=35230 DPT=9102 SEQ=1998856449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A2E1E40000000001030307) Dec 2 04:44:51 localhost nova_compute[230637]: 2025-12-02 09:44:51.097 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 04:44:52 localhost podman[263755]: 2025-12-02 09:44:52.442829864 +0000 UTC m=+0.083690799 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:44:52 localhost podman[263755]: 2025-12-02 09:44:52.451000213 +0000 UTC m=+0.091861118 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:44:52 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 04:44:53 localhost nova_compute[230637]: 2025-12-02 09:44:53.723 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:56 localhost nova_compute[230637]: 2025-12-02 09:44:56.125 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:58 localhost nova_compute[230637]: 2025-12-02 09:44:58.726 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:44:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:44:59 localhost podman[263797]: 2025-12-02 09:44:59.443006389 +0000 UTC m=+0.082792374 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 04:44:59 localhost podman[263797]: 2025-12-02 09:44:59.45902495 +0000 UTC m=+0.098810945 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:44:59 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:45:01 localhost nova_compute[230637]: 2025-12-02 09:45:01.155 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:45:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:45:02 localhost podman[263818]: 2025-12-02 09:45:02.434830146 +0000 UTC m=+0.077911373 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:45:02 localhost podman[263818]: 2025-12-02 09:45:02.443093338 +0000 UTC m=+0.086174605 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:45:02 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:45:02 localhost systemd[1]: tmp-crun.nFkIha.mount: Deactivated successfully. Dec 2 04:45:02 localhost podman[263819]: 2025-12-02 09:45:02.497843949 +0000 UTC m=+0.138961454 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 04:45:02 localhost podman[263819]: 2025-12-02 09:45:02.581103495 +0000 UTC m=+0.222220960 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:45:02 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:45:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:45:03.028 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:45:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:45:03.028 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:45:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:45:03.030 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:45:03 localhost sshd[263865]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:45:03 localhost systemd-logind[757]: New session 60 of user zuul. Dec 2 04:45:03 localhost systemd[1]: Started Session 60 of User zuul. 
Dec 2 04:45:03 localhost nova_compute[230637]: 2025-12-02 09:45:03.766 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63019 DF PROTO=TCP SPT=48422 DPT=9102 SEQ=1265174586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A31A3D0000000001030307) Dec 2 04:45:04 localhost openstack_network_exporter[242845]: ERROR 09:45:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:45:04 localhost openstack_network_exporter[242845]: ERROR 09:45:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:45:04 localhost openstack_network_exporter[242845]: ERROR 09:45:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:45:04 localhost openstack_network_exporter[242845]: ERROR 09:45:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:45:04 localhost openstack_network_exporter[242845]: Dec 2 04:45:04 localhost openstack_network_exporter[242845]: ERROR 09:45:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:45:04 localhost openstack_network_exporter[242845]: Dec 2 04:45:04 localhost python3.9[263976]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:45:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63020 DF PROTO=TCP SPT=48422 DPT=9102 SEQ=1265174586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A47A31E640000000001030307) Dec 2 04:45:05 localhost python3.9[264089]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:45:05 localhost network[264106]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:45:05 localhost network[264107]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:45:05 localhost network[264108]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:45:05 localhost nova_compute[230637]: 2025-12-02 09:45:05.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:05 localhost nova_compute[230637]: 2025-12-02 09:45:05.722 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 04:45:05 localhost nova_compute[230637]: 2025-12-02 09:45:05.747 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 04:45:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51721 DF PROTO=TCP SPT=35230 DPT=9102 SEQ=1998856449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A321E40000000001030307) Dec 2 04:45:06 localhost podman[240799]: time="2025-12-02T09:45:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:45:06 localhost podman[240799]: @ - - [02/Dec/2025:09:45:06 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 2 04:45:06 localhost podman[240799]: @ - - [02/Dec/2025:09:45:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17215 "" "Go-http-client/1.1" Dec 2 04:45:06 localhost nova_compute[230637]: 2025-12-02 09:45:06.192 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:45:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63021 DF PROTO=TCP SPT=48422 DPT=9102 SEQ=1265174586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A326650000000001030307) Dec 2 04:45:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19946 DF PROTO=TCP SPT=51420 DPT=9102 SEQ=20700297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A329E40000000001030307) Dec 2 04:45:08 localhost nova_compute[230637]: 2025-12-02 09:45:08.808 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:09 localhost nova_compute[230637]: 2025-12-02 09:45:09.747 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:09 localhost nova_compute[230637]: 
2025-12-02 09:45:09.779 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:45:09 localhost nova_compute[230637]: 2025-12-02 09:45:09.779 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:45:09 localhost nova_compute[230637]: 2025-12-02 09:45:09.780 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:45:09 localhost nova_compute[230637]: 2025-12-02 09:45:09.780 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:45:09 localhost nova_compute[230637]: 2025-12-02 09:45:09.781 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.246 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df 
--format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.312 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.313 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:45:10 localhost python3.9[264362]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.487 230641 WARNING nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.488 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12125MB free_disk=41.837242126464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.489 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.489 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.857 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.858 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.858 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:45:10 localhost nova_compute[230637]: 2025-12-02 09:45:10.930 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.000 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:45:11 
localhost nova_compute[230637]: 2025-12-02 09:45:11.001 230641 DEBUG nova.compute.provider_tree [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.023 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.053 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: 
COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,HW_CPU_X86_SSE2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI2,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSSE3,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:45:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63022 DF PROTO=TCP SPT=48422 DPT=9102 SEQ=1265174586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A336240000000001030307) Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.091 230641 DEBUG oslo_concurrency.processutils [None 
req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.224 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:11 localhost python3.9[264427]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.521 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.529 230641 DEBUG nova.compute.provider_tree [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.549 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed for provider 
c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.551 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.551 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.552 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.552 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.722 230641 DEBUG 
oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.752 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Triggering sync for uuid b254bb7f-2891-4b37-9c44-9700e301ce16 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.753 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.754 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.755 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:11 localhost nova_compute[230637]: 2025-12-02 09:45:11.845 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" "released" by 
"nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:45:12 localhost systemd[1]: tmp-crun.HOULG4.mount: Deactivated successfully. Dec 2 04:45:12 localhost podman[264452]: 2025-12-02 09:45:12.454966701 +0000 UTC m=+0.094162519 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:45:12 localhost podman[264452]: 2025-12-02 09:45:12.467848858 +0000 UTC m=+0.107044736 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:45:12 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:45:12 localhost nova_compute[230637]: 2025-12-02 09:45:12.793 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:13 localhost nova_compute[230637]: 2025-12-02 09:45:13.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:13 localhost nova_compute[230637]: 2025-12-02 09:45:13.722 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:45:13 localhost nova_compute[230637]: 2025-12-02 09:45:13.722 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:45:13 localhost nova_compute[230637]: 2025-12-02 09:45:13.853 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.184 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.184 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.185 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.185 230641 DEBUG nova.objects.instance [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.538 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": 
"e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.555 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.556 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.557 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.557 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:14 localhost nova_compute[230637]: 2025-12-02 09:45:14.723 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:45:15 localhost podman[264579]: 2025-12-02 09:45:15.464498904 +0000 UTC m=+0.098030904 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 04:45:15 localhost podman[264579]: 2025-12-02 09:45:15.500044718 +0000 UTC m=+0.133576668 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent) Dec 2 04:45:15 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:45:15 localhost python3.9[264590]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:45:15 localhost nova_compute[230637]: 2025-12-02 09:45:15.718 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:15 localhost nova_compute[230637]: 2025-12-02 09:45:15.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:16 localhost nova_compute[230637]: 2025-12-02 09:45:16.263 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
04:45:16 localhost python3.9[264708]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:45:16 localhost nova_compute[230637]: 2025-12-02 09:45:16.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:17 localhost python3.9[264819]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:45:17 localhost nova_compute[230637]: 2025-12-02 09:45:17.718 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:45:18 localhost python3.9[264931]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:18 localhost nova_compute[230637]: 2025-12-02 09:45:18.854 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:19 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63023 DF PROTO=TCP SPT=48422 DPT=9102 SEQ=1265174586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A355E40000000001030307) Dec 2 04:45:19 localhost python3.9[265041]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:45:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:45:20 localhost systemd[1]: tmp-crun.lCJdOj.mount: Deactivated successfully. Dec 2 04:45:20 localhost podman[265043]: 2025-12-02 09:45:20.462636578 +0000 UTC m=+0.097050498 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Dec 2 04:45:20 localhost podman[265043]: 2025-12-02 09:45:20.501133781 +0000 UTC m=+0.135547701 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container,
container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41)
Dec 2 04:45:20 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 04:45:21 localhost python3.9[265175]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 2 04:45:21 localhost nova_compute[230637]: 2025-12-02 09:45:21.300 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 04:45:23 localhost podman[265288]: 2025-12-02 09:45:23.248159843 +0000 UTC m=+0.086131665 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:45:23 localhost podman[265288]: 2025-12-02 09:45:23.255224083 +0000 UTC m=+0.093195915 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c',
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:45:23 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 04:45:23 localhost python3.9[265287]: ansible-ansible.builtin.service_facts Invoked
Dec 2 04:45:23 localhost network[265327]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 2 04:45:23 localhost network[265328]: 'network-scripts' will be removed from distribution in near future.
Dec 2 04:45:23 localhost network[265329]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 2 04:45:23 localhost nova_compute[230637]: 2025-12-02 09:45:23.908 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead.
Support for MemoryLimit= will be removed soon.
Dec 2 04:45:26 localhost nova_compute[230637]: 2025-12-02 09:45:26.313 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:28 localhost nova_compute[230637]: 2025-12-02 09:45:28.946 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:29 localhost python3.9[265563]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 2 04:45:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 04:45:30 localhost systemd[1]: tmp-crun.coWoKH.mount: Deactivated successfully.
Dec 2 04:45:30 localhost podman[265673]: 2025-12-02 09:45:30.110872867 +0000 UTC m=+0.101666762 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 2 04:45:30 localhost podman[265673]: 2025-12-02 09:45:30.150128071 +0000 UTC m=+0.140921956 container exec_died
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 2 04:45:30 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 04:45:30 localhost python3.9[265674]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 2 04:45:30 localhost python3.9[265801]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:45:31 localhost python3.9[265858]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:45:31 localhost nova_compute[230637]: 2025-12-02 09:45:31.356 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:32 localhost python3.9[265968]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 04:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:45:32 localhost podman[266079]: 2025-12-02 09:45:32.661828721 +0000 UTC m=+0.059555630 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:45:32 localhost systemd[1]: tmp-crun.mTSXyd.mount: Deactivated successfully.
Dec 2 04:45:32 localhost podman[266080]: 2025-12-02 09:45:32.693968775 +0000 UTC m=+0.084195702 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 2 04:45:32 localhost podman[266079]: 2025-12-02 09:45:32.719795338 +0000 UTC m=+0.117522337 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root',
'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:45:32 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 04:45:32 localhost podman[266080]: 2025-12-02 09:45:32.77195464 +0000 UTC m=+0.162181577 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2,
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:45:32 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:45:32 localhost python3.9[266078]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:45:33 localhost python3.9[266236]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:45:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31399 DF PROTO=TCP SPT=33290 DPT=9102 SEQ=3631968402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A38F6D0000000001030307)
Dec 2 04:45:33 localhost nova_compute[230637]: 2025-12-02 09:45:33.993 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:34 localhost openstack_network_exporter[242845]: ERROR 09:45:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:45:34 localhost openstack_network_exporter[242845]: ERROR 09:45:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:45:34 localhost openstack_network_exporter[242845]: ERROR 09:45:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2
04:45:34 localhost openstack_network_exporter[242845]: ERROR 09:45:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:45:34 localhost openstack_network_exporter[242845]:
Dec 2 04:45:34 localhost openstack_network_exporter[242845]: ERROR 09:45:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:45:34 localhost openstack_network_exporter[242845]:
Dec 2 04:45:34 localhost python3.9[266348]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:45:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31400 DF PROTO=TCP SPT=33290 DPT=9102 SEQ=3631968402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A393640000000001030307)
Dec 2 04:45:35 localhost python3.9[266460]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:45:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63024 DF PROTO=TCP SPT=48422 DPT=9102 SEQ=1265174586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A395E40000000001030307)
Dec 2 04:45:35 localhost python3.9[266571]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None
attributes=None
Dec 2 04:45:36 localhost podman[240799]: time="2025-12-02T09:45:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:45:36 localhost podman[240799]: @ - - [02/Dec/2025:09:45:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1"
Dec 2 04:45:36 localhost podman[240799]: @ - - [02/Dec/2025:09:45:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17216 "" "Go-http-client/1.1"
Dec 2 04:45:36 localhost nova_compute[230637]: 2025-12-02 09:45:36.399 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:36 localhost python3.9[266681]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:45:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31401 DF PROTO=TCP SPT=33290 DPT=9102 SEQ=3631968402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A39B640000000001030307)
Dec 2 04:45:37 localhost python3.9[266791]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None
attributes=None
Dec 2 04:45:37 localhost python3.9[266901]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:45:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51722 DF PROTO=TCP SPT=35230 DPT=9102 SEQ=1998856449 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A39FE50000000001030307)
Dec 2 04:45:38 localhost python3.9[267011]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:45:38 localhost python3.9[267121]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 2 04:45:39 localhost nova_compute[230637]: 2025-12-02 09:45:39.023 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:39 localhost python3.9[267233]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None
src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:45:40 localhost python3.9[267343]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:45:40 localhost python3.9[267400]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:45:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31402 DF PROTO=TCP SPT=33290 DPT=9102 SEQ=3631968402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A3AB240000000001030307)
Dec 2 04:45:41 localhost nova_compute[230637]: 2025-12-02 09:45:41.402 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:45:41 localhost python3.9[267510]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:45:41 localhost python3.9[267567]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container
recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:45:42 localhost python3.9[267677]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:45:43 localhost podman[267788]: 2025-12-02 09:45:43.142277642 +0000 UTC m=+0.082332683 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro',
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 2 04:45:43 localhost podman[267788]: 2025-12-02 09:45:43.1530318 +0000 UTC m=+0.093086871 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro',
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm) Dec 2 04:45:43 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:45:43 localhost python3.9[267787]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:43 localhost python3.9[267864]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:44 localhost nova_compute[230637]: 2025-12-02 09:45:44.084 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:44 localhost python3.9[267974]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:44 localhost python3.9[268031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:45 localhost python3.9[268141]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:45:45 localhost systemd[1]: Reloading. 
Dec 2 04:45:45 localhost podman[268143]: 2025-12-02 09:45:45.926673937 +0000 UTC m=+0.099529874 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent) Dec 2 04:45:45 localhost systemd-sysv-generator[268189]: SysV service 
'/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:45:45 localhost systemd-rc-local-generator[268184]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:45:45 localhost podman[268143]: 2025-12-02 09:45:45.960047113 +0000 UTC m=+0.132903070 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:45:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:46 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:45:46 localhost nova_compute[230637]: 2025-12-02 09:45:46.440 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:46 localhost python3.9[268307]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:47 localhost python3.9[268364]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:47 localhost python3.9[268474]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:48 localhost python3.9[268531]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:49 localhost nova_compute[230637]: 2025-12-02 09:45:49.137 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:49 localhost python3.9[268641]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:45:49 localhost systemd[1]: Reloading. Dec 2 04:45:49 localhost systemd-rc-local-generator[268669]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:45:49 localhost systemd-sysv-generator[268673]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31403 DF PROTO=TCP SPT=33290 DPT=9102 SEQ=3631968402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A3CBE40000000001030307) Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:45:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:45:50 localhost systemd[1]: Starting Create netns directory... Dec 2 04:45:50 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 2 04:45:50 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 2 04:45:50 localhost systemd[1]: Finished Create netns directory. Dec 2 04:45:50 localhost podman[268682]: 2025-12-02 09:45:50.744322192 +0000 UTC m=+0.088483967 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7) Dec 2 04:45:50 localhost podman[268682]: 2025-12-02 09:45:50.757914358 +0000 UTC m=+0.102076123 container exec_died 
6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 2 04:45:50 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:45:51 localhost nova_compute[230637]: 2025-12-02 09:45:51.441 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:51 localhost python3.9[268814]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:52 localhost python3.9[268924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:52 localhost python3.9[268981]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:45:53 localhost podman[269139]: 2025-12-02 09:45:53.45172521 +0000 UTC m=+0.086367241 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:45:53 localhost podman[269139]: 2025-12-02 09:45:53.483255747 +0000 UTC m=+0.117897768 container exec_died 
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:45:53 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:45:53 localhost python3.9[269140]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:45:54 localhost nova_compute[230637]: 2025-12-02 09:45:54.182 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:54 localhost python3.9[269291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:45:54 localhost python3.9[269348]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.psqpkend recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:55 localhost python3.9[269458]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:45:56 localhost nova_compute[230637]: 2025-12-02 
09:45:56.443 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:57 localhost python3.9[269735]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Dec 2 04:45:58 localhost python3.9[269863]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:45:59 localhost nova_compute[230637]: 2025-12-02 09:45:59.223 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:45:59 localhost python3.9[269973]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 2 04:46:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:46:00 localhost systemd[1]: tmp-crun.KZUNbz.mount: Deactivated successfully. 
Dec 2 04:46:00 localhost podman[270015]: 2025-12-02 09:46:00.467441543 +0000 UTC m=+0.100285404 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:46:00 localhost podman[270015]: 2025-12-02 09:46:00.477135744 +0000 UTC m=+0.109979625 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3) Dec 2 04:46:00 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:46:01 localhost nova_compute[230637]: 2025-12-02 09:46:01.446 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:46:03.028 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:46:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:46:03.030 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:46:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:46:03.031 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:46:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:46:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:46:03 localhost systemd[1]: tmp-crun.2Y0RaC.mount: Deactivated successfully. 
Dec 2 04:46:03 localhost podman[270073]: 2025-12-02 09:46:03.44024645 +0000 UTC m=+0.076103555 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller) Dec 2 04:46:03 localhost podman[270072]: 2025-12-02 09:46:03.505571395 +0000 UTC m=+0.139923129 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:46:03 localhost podman[270072]: 2025-12-02 09:46:03.520025643 +0000 UTC m=+0.154377347 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:46:03 localhost podman[270073]: 2025-12-02 09:46:03.532053096 +0000 UTC m=+0.167910201 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:46:03 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:46:03 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:46:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36160 DF PROTO=TCP SPT=37952 DPT=9102 SEQ=513994716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A4049E0000000001030307) Dec 2 04:46:03 localhost python3[270175]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:46:04 localhost openstack_network_exporter[242845]: ERROR 09:46:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:46:04 localhost openstack_network_exporter[242845]: ERROR 09:46:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:46:04 localhost openstack_network_exporter[242845]: ERROR 09:46:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:46:04 localhost openstack_network_exporter[242845]: ERROR 09:46:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:46:04 localhost openstack_network_exporter[242845]: Dec 2 04:46:04 localhost openstack_network_exporter[242845]: ERROR 09:46:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:46:04 localhost openstack_network_exporter[242845]: Dec 2 04:46:04 localhost nova_compute[230637]: 2025-12-02 09:46:04.225 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:04 localhost python3[270175]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7",#012 "Digest": 
"sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:11:02.031267563Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249482216,#012 "VirtualSize": 249482216,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 
"sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:8c448567789503f6c5be645a12473dfc27734872532d528b6ee764c214f9f2f3"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": 
true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:05.672474685Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:06.113425253Z",#012 Dec 2 04:46:04 localhost python3.9[270350]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False 
get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:46:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36161 DF PROTO=TCP SPT=37952 DPT=9102 SEQ=513994716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A408A40000000001030307) Dec 2 04:46:05 localhost python3.9[270462]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31404 DF PROTO=TCP SPT=33290 DPT=9102 SEQ=3631968402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A40BE40000000001030307) Dec 2 04:46:06 localhost podman[240799]: time="2025-12-02T09:46:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:46:06 localhost python3.9[270517]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:46:06 localhost podman[240799]: @ - - [02/Dec/2025:09:46:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 2 04:46:06 localhost podman[240799]: @ - - [02/Dec/2025:09:46:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 
200 17228 "" "Go-http-client/1.1" Dec 2 04:46:06 localhost nova_compute[230637]: 2025-12-02 09:46:06.449 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:06 localhost python3.9[270626]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668766.1321316-1365-260924313518560/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36162 DF PROTO=TCP SPT=37952 DPT=9102 SEQ=513994716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A410A40000000001030307) Dec 2 04:46:07 localhost python3.9[270681]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63025 DF PROTO=TCP SPT=48422 DPT=9102 SEQ=1265174586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A413E40000000001030307) Dec 2 04:46:09 localhost nova_compute[230637]: 2025-12-02 09:46:09.229 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:09 localhost python3.9[270791]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False 
get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:46:10 localhost python3.9[270901]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:10 localhost nova_compute[230637]: 2025-12-02 09:46:10.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:10 localhost nova_compute[230637]: 2025-12-02 09:46:10.746 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:46:10 localhost nova_compute[230637]: 2025-12-02 09:46:10.746 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:46:10 localhost nova_compute[230637]: 2025-12-02 09:46:10.746 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:46:10 localhost nova_compute[230637]: 2025-12-02 09:46:10.747 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:46:10 localhost nova_compute[230637]: 2025-12-02 09:46:10.747 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:46:10 localhost python3.9[271012]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 2 04:46:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36163 DF PROTO=TCP SPT=37952 DPT=9102 SEQ=513994716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A420640000000001030307) Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.191 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 
09:46:11.255 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.257 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.431 230641 WARNING nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.432 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12140MB free_disk=41.837242126464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.432 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.433 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 
04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.450 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.535 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.535 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.536 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:46:11 localhost nova_compute[230637]: 2025-12-02 09:46:11.566 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:46:11 localhost python3.9[271143]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Dec 2 04:46:12 localhost nova_compute[230637]: 2025-12-02 
09:46:12.106 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:46:12 localhost nova_compute[230637]: 2025-12-02 09:46:12.113 230641 DEBUG nova.compute.provider_tree [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:46:12 localhost nova_compute[230637]: 2025-12-02 09:46:12.128 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:46:12 localhost nova_compute[230637]: 2025-12-02 09:46:12.131 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:46:12 localhost nova_compute[230637]: 2025-12-02 09:46:12.131 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:46:12 localhost python3.9[271275]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:46:12 localhost python3.9[271332]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:46:13 localhost systemd[1]: tmp-crun.uk8zLm.mount: Deactivated successfully. 
Dec 2 04:46:13 localhost podman[271443]: 2025-12-02 09:46:13.382751341 +0000 UTC m=+0.080477712 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 04:46:13 localhost podman[271443]: 2025-12-02 09:46:13.389502153 +0000 UTC m=+0.087228544 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 04:46:13 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:46:13 localhost python3.9[271442]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:14 localhost nova_compute[230637]: 2025-12-02 09:46:14.266 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:14 localhost python3.9[271571]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 2 04:46:15 localhost nova_compute[230637]: 2025-12-02 09:46:15.133 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:15 localhost nova_compute[230637]: 2025-12-02 09:46:15.134 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:15 localhost nova_compute[230637]: 2025-12-02 09:46:15.134 230641 DEBUG nova.compute.manager 
[None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:46:15 localhost nova_compute[230637]: 2025-12-02 09:46:15.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:15 localhost nova_compute[230637]: 2025-12-02 09:46:15.722 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:46:15 localhost nova_compute[230637]: 2025-12-02 09:46:15.723 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.098 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling 
/usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.099 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.113 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.114 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7a89f84e-73d0-411c-b89a-7fe08893be7f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.099271', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bef84baa-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.318343494, 'message_signature': '9af0cdd049b4aab894039072128f62f0e2d278bb316d1c0ffa4202d7b9f70b15'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.099271', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bef85f50-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.318343494, 'message_signature': 'da13a6892b3a6095e8944e1acc8b371cdba6acd319e502f57a970664da7e1f5d'}]}, 'timestamp': '2025-12-02 09:46:16.114841', '_unique_id': 'dbb03d2929be406db9e3579544390bb7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.116 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.117 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.117 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.154 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1433516318 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.155 12 DEBUG ceilometer.compute.pollsters [-] 
b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 164656012 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cca7c219-e1ee-4324-82f7-3c03aaacb7e5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1433516318, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.118038', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'befe94ba-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': '6baec9d85f340eee836a205dccbf4e75c5b1b5cb39c68fe6485f4adad2b578d5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 164656012, 
'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.118038', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'befeaac2-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': 'e55978fd3b6ff0791eb251210cda871fb1ea4b951277978cfbb6935cc60c0998'}]}, 'timestamp': '2025-12-02 09:46:16.156051', '_unique_id': 'dedbd51a25f74a069799edde39cdc410'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.158 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.163 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e1fa8267-7354-4c28-b0ad-4bc1688f9761', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.159096', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'beffe572-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': '38a3b426aeef1aa8bcb8e9828750a9f291a4a73c2b1e0dc13c52bbce6c4cd171'}]}, 'timestamp': '2025-12-02 09:46:16.164145', '_unique_id': 'b1a9f13a8a3f463ea07d5d1e17f3914d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:46:16.165 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.165 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.167 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.167 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.168 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43f00cc4-b792-45d3-ab0f-cce287b21020', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.167228', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf007244-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': 'a7285e08d7f8d17ed2481e6f044bfb2306297766ec45b7773e8382ffaff7e2d6'}]}, 'timestamp': '2025-12-02 09:46:16.167785', '_unique_id': 'b2fd0dc3bc9543539cfc6c619112ebb8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.170 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.170 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.170 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '0c925dee-eb18-4702-bce6-a3b49a489439', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.170283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bf00e918-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.318343494, 'message_signature': '0cdd0e076a1105570114d15c4e5c921e228b2a9cd38645f582845a64f143db62'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.170283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bf00faac-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.318343494, 'message_signature': '0d72366fb81dc7cc991d3df80f9a15b26b951899cec612090be16affc4b2547e'}]}, 'timestamp': '2025-12-02 09:46:16.171177', '_unique_id': '8441cbe0da644678b51af211d85a018a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.173 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.173 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2d125171-54f3-4f82-8065-b1d51b157568', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.173494', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf016a50-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': '3bfd6cae092e3d660b5e3613b6f58f22a021936ba9b778a3ae48b8b03dd83fad'}]}, 'timestamp': '2025-12-02 09:46:16.174109', '_unique_id': '17bf2180ca394d35b4e47023a22326c4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.176 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.176 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '61999be4-8de6-4ced-bcc6-59b83116ec7c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.176477', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf01dc7e-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': 'ab1a35b1a8bd2f6dff692ac5b3a146630a7c5b2f606244e570e043551c859ec0'}]}, 'timestamp': '2025-12-02 09:46:16.176993', '_unique_id': 'd578022b512c43eb9374810e23c31c25'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:46:16.178 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.178 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.179 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.179 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '694dee5c-c059-4744-bfd4-0be443043b65', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.179259', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf0248e4-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': '0ccd689ae86b4384763ca0ff1c7120611779c953f47bc5853493e5f84e88e46b'}]}, 'timestamp': '2025-12-02 09:46:16.179800', '_unique_id': '7937268aad744c9a8cf01908c0f3ef03'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.182 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.182 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'af3783e6-9ff5-416a-8f42-639400209a5a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.182338', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf02c300-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': '124e6914a0ca426a0fde4306cee6320f160a61d4ee1af644612f2cdff0f46da6'}]}, 'timestamp': '2025-12-02 09:46:16.182914', '_unique_id': 'fd3d18ea52374f09896e64354b81e40d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.185 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.185 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 524 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.185 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7277ee5a-f41e-4df7-8d0d-08e75bad0314', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 524, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.185193', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bf032fb6-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': '1cb01a29bfe97ab941e249b885af30b3f80e114507337b1aa2a41166c9e3f323'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.185193', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bf03447e-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': '6ae6b612c915a94c89b32f65e33dcead58c074793b7f7aaaa82cb884a1d900cb'}]}, 'timestamp': '2025-12-02 09:46:16.186196', '_unique_id': '2dcbd544b3fa4dc1977ad44b4ca50000'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.187 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.188 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4ae40175-bb13-4eca-ae66-7fad19c6e3b7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.188576', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf03b5a8-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': '9b22c1617d1f39de4fc76df8dcac7e80dd0c568de6c811d31a762d50b166c022'}]}, 'timestamp': '2025-12-02 09:46:16.189184', '_unique_id': 'a2eb66c25a464bc190fca6efc0867a6e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.190 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.191 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.192 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.192 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0b51a201-4e13-4962-b639-9be423dc7bb7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.192013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bf043a32-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': '3bb3847b5457056d16f9270869004b87b50b5bce6d88bb826bac8b2b136bf7d6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.192013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bf044c7a-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': 'f73aeaf5a6b390541ae3e6c659371d9ee21528699089749ba1cef060a501701d'}]}, 'timestamp': '2025-12-02 09:46:16.192949', '_unique_id': 'd97d60f31a83446cab66a79489de3161'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.194 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.195 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.195 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.195 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c25e9341-44a8-49c2-b6a3-687e8a6e9825', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.195352', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bf04bca0-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.318343494, 'message_signature': '3833870e022007cf49fd1fcdb46cf08a87e485e813216079fceafb1a577f60b4'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.195352', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bf04cee8-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.318343494, 'message_signature': '093fa427caf6deec27a2c5513efa6db0f92378b2678423d4635c89dd299c614d'}]}, 'timestamp': '2025-12-02 09:46:16.196287', '_unique_id': '1c1fb258faa5466fa5409c6dc4989c6b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.197 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.198 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.198 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 87 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '80a1acff-5a02-479f-82ea-e8e72b9bfe5b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 87, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.198690', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf053f54-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': 'f4276e0a7762fc091ef6a8f9bdcecbb49a7e1618d00f31b165309ceacba2a862'}]}, 'timestamp': '2025-12-02 09:46:16.199193', '_unique_id': '7b608551bec64c40be1df7d01bb866d7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.200 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.201 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.201 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b1c8cd0e-2bb8-4dad-b754-93038f11f825', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.201451', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf05acbe-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': '88e1d7c1c66663385d21967e1702e56d146e98eb0de58adbd2961777151f4354'}]}, 'timestamp': '2025-12-02 09:46:16.202031', '_unique_id': '4a7fb775c75c458db64a099498633fd4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.203 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 09:46:16.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.205 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 09:46:16.227 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 09:46:16.228 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 09:46:16.229 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 
09:46:16.229 230641 DEBUG nova.objects.instance [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.231 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 57840000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f301483e-db00-4f9c-88fe-199e3013336d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 57840000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:46:16.205881', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'cpu_number': 1}, 'message_id': 'bf0a564c-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.45076119, 'message_signature': '76c3c117b3b0961f249bc40e56abb4de2ceead051972a23042c0a7c101acb1fe'}]}, 'timestamp': '2025-12-02 09:46:16.232573', '_unique_id': '7ced1302c25341c7a665520ff1a3e9e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() 
Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 
111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.234 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.235 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.235 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 9229 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1c204161-71fc-409c-8bd7-f1dc5f912501', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9229, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:46:16.235221', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'bf0acfc8-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.378196052, 'message_signature': 'ac2bd590559422053dc293cc4d3f4ac2ca438bd3fbdff417929b410398137d65'}]}, 'timestamp': '2025-12-02 09:46:16.235579', '_unique_id': '7ff483cab4c84f0bbea3d04581db8a1a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.236 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.237 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.237 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 286697561 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.238 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 39228582 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd18122a2-217e-41fd-9ef4-92a007e6bedd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 286697561, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.237587', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bf0b2cca-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': '1305c263a2656256622c364f7edd4876f903f95aeeeea2f08d6cbf1ecb33093f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 39228582, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.237587', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bf0b3b34-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': 'eb347ca8174340243e17699f0f5ddf30cd2e14822975de9bc4820b31d72f20be'}]}, 'timestamp': '2025-12-02 09:46:16.238306', '_unique_id': 'a3a788cabf0a4f7e881f5f1bcc0bd54b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:46:16.239 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.239 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.240 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.240 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.240 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ad20faac-7c9e-463e-95e4-c9aa3f344f95', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.240215', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'bf0b920a-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': '8cac713482d996f439efff7e6c7d45f7d6fc0df7a54c81e75e2e75224856e8f1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.240215', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bf0b9dfe-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': '060159a64c9bb125e04afa3cfd9906129a48bef582c5c5c280876da640edd675'}]}, 'timestamp': '2025-12-02 09:46:16.240832', '_unique_id': '58e38f5131ed4774944815a654201c99'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.241 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.242 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.242 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 52.35546875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4aadf9ab-317d-4004-910a-f104a78bd646', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.35546875, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:46:16.242351', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'bf0be584-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.45076119, 'message_signature': 'dfa86aa0121ca26ad45045baba5bac654a9f07c37e3f09599c2c84cbd8a5dead'}]}, 'timestamp': '2025-12-02 09:46:16.242704', '_unique_id': 'd1857f461de74254b001a2951a4def49'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.243 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.244 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:46:16.244 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 73900032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.244 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2ff4aacd-067b-4c10-b341-bb5d83bd96c0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73900032, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:46:16.244130', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 
'vda'}, 'message_id': 'bf0c2abc-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': '9c1223cebe164f3e230f99f7d641b9634bcc535c5940b30ef176572e1b2ec9d1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:46:16.244130', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'bf0c35b6-cf63-11f0-a0da-fa163e3f40cc', 'monotonic_time': 10938.337128519, 'message_signature': 'e0480ce6e86e246bde5418b5873bca78d2524ea6d6404b84be66135c10fd2e73'}]}, 'timestamp': '2025-12-02 09:46:16.244737', '_unique_id': 'd3e593f656844adab751edaacd692d8e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 446, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 
ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:46:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:46:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:46:16.245 12 ERROR oslo_messaging.notify.messaging Dec 2 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 09:46:16.453 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:16 localhost podman[271574]: 2025-12-02 09:46:16.458714487 +0000 UTC m=+0.089845334 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Dec 2 04:46:16 localhost podman[271574]: 2025-12-02 09:46:16.491114748 +0000 UTC m=+0.122245585 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team) Dec 2 04:46:16 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 09:46:16.608 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 09:46:16.623 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:46:16 localhost 
nova_compute[230637]: 2025-12-02 09:46:16.623 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 09:46:16.624 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:16 localhost nova_compute[230637]: 2025-12-02 09:46:16.624 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:17 localhost nova_compute[230637]: 2025-12-02 09:46:17.619 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:17 localhost nova_compute[230637]: 2025-12-02 09:46:17.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:18 localhost python3.9[271699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 2 04:46:18 localhost nova_compute[230637]: 2025-12-02 09:46:18.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] 
Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:46:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36164 DF PROTO=TCP SPT=37952 DPT=9102 SEQ=513994716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A43FE40000000001030307) Dec 2 04:46:19 localhost nova_compute[230637]: 2025-12-02 09:46:19.303 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:19 localhost python3.9[271813]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:20 localhost python3.9[271923]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 2 04:46:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:46:20 localhost systemd[1]: Reloading. Dec 2 04:46:20 localhost systemd-sysv-generator[271964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:46:20 localhost systemd-rc-local-generator[271960]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:46:20 localhost podman[271925]: 2025-12-02 09:46:20.956268805 +0000 UTC m=+0.101053315 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, 
io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm) Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:46:20 localhost podman[271925]: 2025-12-02 09:46:20.999102036 +0000 UTC m=+0.143886526 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7) Dec 2 04:46:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:46:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service 
type, ignoring: notify-reload Dec 2 04:46:21 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:46:21 localhost nova_compute[230637]: 2025-12-02 09:46:21.456 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:21 localhost python3.9[272086]: ansible-ansible.builtin.service_facts Invoked Dec 2 04:46:21 localhost network[272103]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 2 04:46:21 localhost network[272104]: 'network-scripts' will be removed from distribution in near future. Dec 2 04:46:21 localhost network[272105]: It is advised to switch to 'NetworkManager' instead for network management. Dec 2 04:46:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:46:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:46:24 localhost podman[272143]: 2025-12-02 09:46:24.079517042 +0000 UTC m=+0.080068162 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:46:24 localhost podman[272143]: 2025-12-02 09:46:24.116025973 +0000 UTC m=+0.116577093 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:46:24 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:46:24 localhost nova_compute[230637]: 2025-12-02 09:46:24.336 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:26 localhost nova_compute[230637]: 2025-12-02 09:46:26.461 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:27 localhost python3.9[272362]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:29 localhost nova_compute[230637]: 2025-12-02 09:46:29.373 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:29 localhost python3.9[272473]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:30 localhost python3.9[272584]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:46:30 localhost podman[272695]: 2025-12-02 09:46:30.882218863 +0000 UTC m=+0.093722018 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2) Dec 2 04:46:30 localhost podman[272695]: 2025-12-02 09:46:30.89513317 +0000 UTC m=+0.106636385 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:46:30 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:46:31 localhost python3.9[272696]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:31 localhost nova_compute[230637]: 2025-12-02 09:46:31.465 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:31 localhost python3.9[272826]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:32 localhost python3.9[272937]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:33 localhost python3.9[273048]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:46:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:46:33 localhost podman[273161]: 2025-12-02 09:46:33.8280887 +0000 UTC m=+0.083065981 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 04:46:33 localhost podman[273160]: 2025-12-02 09:46:33.881704293 +0000 UTC m=+0.138707248 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:46:33 localhost podman[273160]: 2025-12-02 09:46:33.888938916 +0000 UTC m=+0.145941851 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:46:33 localhost podman[273161]: 2025-12-02 09:46:33.897062983 +0000 UTC m=+0.152040304 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 04:46:33 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:46:33 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:46:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46422 DF PROTO=TCP SPT=56484 DPT=9102 SEQ=2936571885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A479CE0000000001030307) Dec 2 04:46:34 localhost python3.9[273159]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:46:34 localhost openstack_network_exporter[242845]: ERROR 09:46:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:46:34 localhost openstack_network_exporter[242845]: ERROR 09:46:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:46:34 localhost openstack_network_exporter[242845]: ERROR 09:46:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:46:34 localhost openstack_network_exporter[242845]: ERROR 09:46:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:46:34 localhost openstack_network_exporter[242845]: Dec 2 04:46:34 localhost openstack_network_exporter[242845]: ERROR 09:46:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:46:34 localhost openstack_network_exporter[242845]: Dec 2 04:46:34 localhost nova_compute[230637]: 2025-12-02 09:46:34.418 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46423 DF PROTO=TCP SPT=56484 DPT=9102 
SEQ=2936571885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A47DE40000000001030307) Dec 2 04:46:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36165 DF PROTO=TCP SPT=37952 DPT=9102 SEQ=513994716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A47FE40000000001030307) Dec 2 04:46:35 localhost python3.9[273318]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:36 localhost podman[240799]: time="2025-12-02T09:46:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:46:36 localhost podman[240799]: @ - - [02/Dec/2025:09:46:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 2 04:46:36 localhost podman[240799]: @ - - [02/Dec/2025:09:46:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17224 "" "Go-http-client/1.1" Dec 2 04:46:36 localhost python3.9[273428]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:36 localhost 
nova_compute[230637]: 2025-12-02 09:46:36.468 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:36 localhost python3.9[273538]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46424 DF PROTO=TCP SPT=56484 DPT=9102 SEQ=2936571885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A485E40000000001030307) Dec 2 04:46:37 localhost python3.9[273648]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31405 DF PROTO=TCP SPT=33290 DPT=9102 SEQ=3631968402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A489E40000000001030307) Dec 2 04:46:38 localhost python3.9[273758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:38 localhost python3.9[273868]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:39 localhost python3.9[273978]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:39 localhost nova_compute[230637]: 2025-12-02 09:46:39.470 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:39 localhost python3.9[274088]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:40 localhost python3.9[274198]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46425 DF PROTO=TCP SPT=56484 DPT=9102 SEQ=2936571885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A495A40000000001030307) Dec 2 04:46:41 localhost python3.9[274308]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:41 localhost nova_compute[230637]: 2025-12-02 09:46:41.471 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:46:41 localhost python3.9[274418]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:42 localhost python3.9[274528]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:42 localhost python3.9[274638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:43 localhost python3.9[274748]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:46:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 04:46:43 localhost python3.9[274858]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:46:43 localhost podman[274859]: 2025-12-02 09:46:43.957382657 +0000 UTC m=+0.228419576 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 04:46:44 localhost podman[274859]: 2025-12-02 09:46:44.040082846 +0000 UTC m=+0.311119775 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:46:44 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 04:46:44 localhost python3.9[274987]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 2 04:46:44 localhost nova_compute[230637]: 2025-12-02 09:46:44.509 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:46:45 localhost python3.9[275097]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:46 localhost python3.9[275207]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 2 04:46:46 localhost nova_compute[230637]: 2025-12-02 09:46:46.476 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:46:46 localhost podman[275318]: 2025-12-02 09:46:46.976914658 +0000 UTC m=+0.089211884 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 2 04:46:46 localhost podman[275318]: 2025-12-02 09:46:46.982773176 +0000 UTC m=+0.095070382 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 04:46:46 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:46:47 localhost python3.9[275317]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 2 04:46:47 localhost systemd[1]: Reloading.
Dec 2 04:46:47 localhost systemd-rc-local-generator[275361]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 2 04:46:47 localhost systemd-sysv-generator[275364]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:46:47 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 2 04:46:48 localhost python3.9[275481]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:48 localhost python3.9[275592]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46426 DF PROTO=TCP SPT=56484 DPT=9102 SEQ=2936571885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A4B5E50000000001030307)
Dec 2 04:46:49 localhost python3.9[275703]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:49 localhost nova_compute[230637]: 2025-12-02 09:46:49.536 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:46:49 localhost python3.9[275814]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:50 localhost python3.9[275925]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:51 localhost python3.9[276036]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 04:46:51 localhost podman[276109]: 2025-12-02 09:46:51.428708938 +0000 UTC m=+0.070518266 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, release=1755695350)
Dec 2 04:46:51 localhost podman[276109]: 2025-12-02 09:46:51.440924195 +0000 UTC m=+0.082733483 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc.)
Dec 2 04:46:51 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 04:46:51 localhost nova_compute[230637]: 2025-12-02 09:46:51.477 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:46:51 localhost python3.9[276166]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:53 localhost python3.9[276277]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 2 04:46:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 04:46:54 localhost systemd[1]: tmp-crun.waDUHa.mount: Deactivated successfully.
Dec 2 04:46:54 localhost podman[276296]: 2025-12-02 09:46:54.433447536 +0000 UTC m=+0.073166046 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 04:46:54 localhost podman[276296]: 2025-12-02 09:46:54.44667904 +0000 UTC m=+0.086397570 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 2 04:46:54 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 04:46:54 localhost nova_compute[230637]: 2025-12-02 09:46:54.593 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:46:56 localhost nova_compute[230637]: 2025-12-02 09:46:56.481 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:46:57 localhost python3.9[276411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:46:57 localhost python3.9[276521]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:46:58 localhost python3.9[276683]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:46:59 localhost python3.9[276846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:46:59 localhost nova_compute[230637]: 2025-12-02 09:46:59.630 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:46:59 localhost python3.9[276976]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:00 localhost python3.9[277086]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:00 localhost python3.9[277196]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 04:47:01 localhost systemd[1]: tmp-crun.Gj3kKJ.mount: Deactivated successfully.
Dec 2 04:47:01 localhost podman[277307]: 2025-12-02 09:47:01.43369143 +0000 UTC m=+0.091433075 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 2 04:47:01 localhost podman[277307]: 2025-12-02 09:47:01.443182063 +0000 UTC m=+0.100923708 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 2 04:47:01 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 04:47:01 localhost nova_compute[230637]: 2025-12-02 09:47:01.483 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:47:01 localhost python3.9[277306]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:02 localhost python3.9[277434]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:02 localhost python3.9[277562]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:47:03.031 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:47:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:47:03.031 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:47:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:47:03.033 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:47:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37563 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=1588680873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A4EEFD0000000001030307)
Dec 2 04:47:04 localhost openstack_network_exporter[242845]: ERROR 09:47:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:47:04 localhost openstack_network_exporter[242845]: ERROR 09:47:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:47:04 localhost openstack_network_exporter[242845]: ERROR 09:47:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:47:04 localhost openstack_network_exporter[242845]: ERROR 09:47:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:47:04 localhost openstack_network_exporter[242845]:
Dec 2 04:47:04 localhost openstack_network_exporter[242845]: ERROR 09:47:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:47:04 localhost openstack_network_exporter[242845]:
Dec 2 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:47:04 localhost podman[277580]: 2025-12-02 09:47:04.441485271 +0000 UTC m=+0.081286973 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:47:04 localhost podman[277580]: 2025-12-02 09:47:04.448038756 +0000 UTC m=+0.087840488 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:47:04 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 04:47:04 localhost systemd[1]: tmp-crun.bLoULd.mount: Deactivated successfully.
Dec 2 04:47:04 localhost podman[277581]: 2025-12-02 09:47:04.497826386 +0000 UTC m=+0.137837934 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:47:04 localhost podman[277581]: 2025-12-02 09:47:04.577962747 +0000 UTC m=+0.217974325 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:47:04 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:47:04 localhost nova_compute[230637]: 2025-12-02 09:47:04.633 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:47:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37564 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=1588680873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A4F3240000000001030307)
Dec 2 04:47:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46427 DF PROTO=TCP SPT=56484 DPT=9102 SEQ=2936571885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A4F5E50000000001030307)
Dec 2 04:47:06 localhost podman[240799]: time="2025-12-02T09:47:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:47:06 localhost podman[240799]: @ - - [02/Dec/2025:09:47:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1"
Dec 2 04:47:06 localhost podman[240799]: @ - - [02/Dec/2025:09:47:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17219 "" "Go-http-client/1.1"
Dec 2 04:47:06 localhost nova_compute[230637]: 2025-12-02 09:47:06.489 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:47:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37565 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=1588680873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A4FB240000000001030307)
Dec 2 04:47:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36166 DF PROTO=TCP SPT=37952 DPT=9102 SEQ=513994716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A4FDE40000000001030307)
Dec 2 04:47:08 localhost python3.9[277721]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 2 04:47:09 localhost nova_compute[230637]: 2025-12-02 09:47:09.680 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:47:09 localhost sshd[277740]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:47:09 localhost systemd-logind[757]: New session 61 of user zuul.
Dec 2 04:47:09 localhost systemd[1]: Started Session 61 of User zuul.
Dec 2 04:47:10 localhost systemd[1]: session-61.scope: Deactivated successfully.
Dec 2 04:47:10 localhost systemd-logind[757]: Session 61 logged out. Waiting for processes to exit.
Dec 2 04:47:10 localhost systemd-logind[757]: Removed session 61.
Dec 2 04:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:47:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 7200.1 total, 600.0 interval
    Cumulative writes: 4776 writes, 21K keys, 4776 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
    Cumulative WAL: 4776 writes, 569 syncs, 8.39 writes per sync, written: 0.02 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
    Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 04:47:10 localhost python3.9[277851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:47:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37566 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=1588680873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A50AE40000000001030307)
Dec 2 04:47:11 localhost python3.9[277937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668830.3757043-3038-59879810286393/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:11 localhost nova_compute[230637]: 2025-12-02 09:47:11.490 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:47:12 localhost python3.9[278045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:47:12 localhost python3.9[278100]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:12 localhost nova_compute[230637]: 2025-12-02 09:47:12.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:47:12 localhost nova_compute[230637]: 2025-12-02 09:47:12.744 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:47:12 localhost nova_compute[230637]: 2025-12-02 09:47:12.744 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:47:12 localhost nova_compute[230637]: 2025-12-02 09:47:12.744 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:47:12 localhost nova_compute[230637]: 2025-12-02 09:47:12.744 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 2 04:47:12 localhost nova_compute[230637]: 2025-12-02 09:47:12.745 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:47:13 localhost python3.9[278209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.185 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.240 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.240 230641 DEBUG nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.454 230641 WARNING nova.virt.libvirt.driver [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.456 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12130MB free_disk=41.837242126464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.456 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.456 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 04:47:13 localhost python3.9[278316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668832.5736573-3038-229553464159819/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.555 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.556 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.556 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 2 04:47:13 localhost nova_compute[230637]: 2025-12-02 09:47:13.600 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 04:47:14 localhost nova_compute[230637]: 2025-12-02 09:47:14.059 230641 DEBUG oslo_concurrency.processutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 04:47:14 localhost nova_compute[230637]: 2025-12-02 09:47:14.065 230641 DEBUG nova.compute.provider_tree [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 2 04:47:14 localhost python3.9[278444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:47:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:47:14 localhost systemd[1]: tmp-crun.I1W1G7.mount: Deactivated successfully.
Dec 2 04:47:14 localhost podman[278533]: 2025-12-02 09:47:14.466777966 +0000 UTC m=+0.099493700 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:47:14 localhost podman[278533]: 2025-12-02 09:47:14.478003866 +0000 UTC m=+0.110719650 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:47:14 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 04:47:14 localhost python3.9[278532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668833.6509578-3038-46550452600796/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=be0176be25a535cff695cce5406adb3d3b53bef4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:14 localhost nova_compute[230637]: 2025-12-02 09:47:14.624 230641 DEBUG nova.scheduler.client.report [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 2 04:47:14 localhost nova_compute[230637]: 2025-12-02 09:47:14.626 230641 DEBUG nova.compute.resource_tracker [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 2 04:47:14 localhost nova_compute[230637]: 2025-12-02 09:47:14.626 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 04:47:14 localhost nova_compute[230637]: 2025-12-02 09:47:14.720 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:47:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:47:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 7200.2 total, 600.0 interval
    Cumulative writes: 5722 writes, 25K keys, 5722 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
    Cumulative WAL: 5722 writes, 780 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
    Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 04:47:15 localhost python3.9[278659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:47:15 localhost python3.9[278745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668834.6595895-3038-101659954237068/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:15 localhost nova_compute[230637]: 2025-12-02 09:47:15.626 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:47:15 localhost nova_compute[230637]: 2025-12-02 09:47:15.627 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:47:15 localhost nova_compute[230637]: 2025-12-02 09:47:15.627 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 2 04:47:15 localhost nova_compute[230637]: 2025-12-02 09:47:15.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:47:16 localhost python3.9[278853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 2 04:47:16 localhost nova_compute[230637]: 2025-12-02 09:47:16.493 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:47:16 localhost nova_compute[230637]: 2025-12-02 09:47:16.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 2 04:47:16 localhost nova_compute[230637]: 2025-12-02 09:47:16.722 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 2 04:47:16 localhost nova_compute[230637]: 2025-12-02 09:47:16.723 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 2 04:47:16 localhost python3.9[278939]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668835.7094193-3038-89830932834428/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 2 04:47:17 localhost nova_compute[230637]: 2025-12-02 09:47:17.324 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 2 04:47:17 localhost nova_compute[230637]: 2025-12-02 09:47:17.325 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 2 04:47:17 localhost nova_compute[230637]: 2025-12-02 09:47:17.325 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 2 04:47:17 localhost nova_compute[230637]: 2025-12-02 09:47:17.326 230641 DEBUG nova.objects.instance [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 2 04:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:47:17 localhost podman[278962]: 2025-12-02 09:47:17.476264342 +0000 UTC m=+0.091880376 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z',
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 04:47:17 localhost podman[278962]: 2025-12-02 09:47:17.48331318 +0000 UTC m=+0.098929184 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent) Dec 2 04:47:17 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:47:17 localhost nova_compute[230637]: 2025-12-02 09:47:17.801 230641 DEBUG nova.network.neutron [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": 
null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:47:17 localhost python3.9[279067]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:18 localhost nova_compute[230637]: 2025-12-02 09:47:18.027 230641 DEBUG oslo_concurrency.lockutils [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:47:18 localhost nova_compute[230637]: 2025-12-02 09:47:18.027 230641 DEBUG nova.compute.manager [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:47:18 localhost nova_compute[230637]: 2025-12-02 09:47:18.028 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:18 localhost nova_compute[230637]: 2025-12-02 09:47:18.029 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 
04:47:18 localhost python3.9[279177]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:18 localhost nova_compute[230637]: 2025-12-02 09:47:18.721 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:18 localhost nova_compute[230637]: 2025-12-02 09:47:18.722 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:19 localhost python3.9[279287]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37567 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=1588680873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A52BE40000000001030307) Dec 2 04:47:19 localhost nova_compute[230637]: 2025-12-02 09:47:19.717 230641 DEBUG oslo_service.periodic_task [None req-b1c85115-e3ce-415a-91e1-26304922cec6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:47:19 localhost nova_compute[230637]: 2025-12-02 09:47:19.771 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:19 localhost python3.9[279399]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:20 localhost python3.9[279507]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:21 localhost python3.9[279617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:21 localhost nova_compute[230637]: 2025-12-02 09:47:21.496 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:21 localhost python3.9[279672]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
selevel=None attributes=None Dec 2 04:47:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:47:22 localhost podman[279781]: 2025-12-02 09:47:22.444959356 +0000 UTC m=+0.085079245 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public) Dec 2 04:47:22 localhost podman[279781]: 2025-12-02 09:47:22.464105498 +0000 UTC m=+0.104225357 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': 
['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 04:47:22 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 04:47:22 localhost python3.9[279780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 2 04:47:22 localhost python3.9[279855]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 2 04:47:23 localhost python3.9[279965]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Dec 2 04:47:24 localhost nova_compute[230637]: 2025-12-02 09:47:24.811 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:47:24 localhost systemd[1]: tmp-crun.kpveXJ.mount: Deactivated successfully. 
Dec 2 04:47:24 localhost podman[280076]: 2025-12-02 09:47:24.95895259 +0000 UTC m=+0.103157738 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:47:24 localhost podman[280076]: 2025-12-02 09:47:24.970298873 +0000 UTC m=+0.114503951 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:47:24 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:47:25 localhost python3.9[280075]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:47:26 localhost python3[280208]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:47:26 localhost python3[280208]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 
"Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": 
"2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set 
/etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 2 04:47:26 localhost nova_compute[230637]: 2025-12-02 09:47:26.499 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:27 localhost python3.9[280380]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:28 localhost python3.9[280492]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Dec 2 04:47:29 localhost python3.9[280602]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 2 04:47:29 localhost nova_compute[230637]: 2025-12-02 09:47:29.857 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:30 localhost python3[280712]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 2 04:47:30 localhost python3[280712]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": 
"sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": 
"/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM 
quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 2 04:47:31 localhost python3.9[280883]: ansible-ansible.builtin.stat Invoked with 
path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:31 localhost nova_compute[230637]: 2025-12-02 09:47:31.503 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:47:32 localhost systemd[1]: tmp-crun.NSpZzR.mount: Deactivated successfully. Dec 2 04:47:32 localhost podman[280996]: 2025-12-02 09:47:32.056765331 +0000 UTC m=+0.102036737 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:47:32 localhost podman[280996]: 2025-12-02 09:47:32.069940973 +0000 UTC m=+0.115212409 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:47:32 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:47:32 localhost python3.9[280995]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:32 localhost python3.9[281123]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668852.2002988-3716-129887121747905/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:47:33 localhost python3.9[281178]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:47:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55248 DF PROTO=TCP SPT=40112 DPT=9102 SEQ=3432569286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A5642E0000000001030307) Dec 2 04:47:34 localhost openstack_network_exporter[242845]: 
ERROR 09:47:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:47:34 localhost openstack_network_exporter[242845]: ERROR 09:47:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:47:34 localhost openstack_network_exporter[242845]: ERROR 09:47:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:47:34 localhost openstack_network_exporter[242845]: ERROR 09:47:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:47:34 localhost openstack_network_exporter[242845]: Dec 2 04:47:34 localhost openstack_network_exporter[242845]: ERROR 09:47:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:47:34 localhost openstack_network_exporter[242845]: Dec 2 04:47:34 localhost python3.9[281288]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:34 localhost nova_compute[230637]: 2025-12-02 09:47:34.894 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55249 DF PROTO=TCP SPT=40112 DPT=9102 SEQ=3432569286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A568240000000001030307) Dec 2 04:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 04:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:47:35 localhost podman[281397]: 2025-12-02 09:47:35.441526016 +0000 UTC m=+0.072685463 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:47:35 localhost python3.9[281396]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:35 localhost podman[281397]: 2025-12-02 09:47:35.454072041 +0000 UTC m=+0.085231498 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:47:35 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:47:35 localhost podman[281398]: 2025-12-02 09:47:35.507856088 +0000 UTC m=+0.134579027 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 04:47:35 localhost podman[281398]: 2025-12-02 09:47:35.545203897 +0000 UTC m=+0.171926846 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2) Dec 2 04:47:35 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:47:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37568 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=1588680873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A56BE40000000001030307) Dec 2 04:47:36 localhost podman[240799]: time="2025-12-02T09:47:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:47:36 localhost podman[240799]: @ - - [02/Dec/2025:09:47:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 2 04:47:36 localhost podman[240799]: @ - - [02/Dec/2025:09:47:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17220 "" "Go-http-client/1.1" Dec 2 04:47:36 localhost python3.9[281549]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 2 04:47:36 localhost nova_compute[230637]: 2025-12-02 09:47:36.506 230641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55250 DF PROTO=TCP SPT=40112 DPT=9102 SEQ=3432569286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A570240000000001030307) Dec 2 04:47:37 localhost python3.9[281659]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None 
annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None 
rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 2 04:47:37 localhost systemd-journald[47611]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 119.8 (399 of 333 items), suggesting rotation. Dec 2 04:47:37 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:47:37 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:47:37 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:47:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46428 DF PROTO=TCP SPT=56484 DPT=9102 SEQ=2936571885 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A573E40000000001030307) Dec 2 04:47:38 localhost python3.9[281792]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 2 04:47:38 localhost systemd[1]: Stopping nova_compute container... 
Dec 2 04:47:38 localhost journal[203664]: End of file while reading data: Input/output error Dec 2 04:47:38 localhost systemd[1]: libpod-a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f.scope: Deactivated successfully. Dec 2 04:47:38 localhost systemd[1]: libpod-a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f.scope: Consumed 20.354s CPU time. Dec 2 04:47:38 localhost podman[281796]: 2025-12-02 09:47:38.405853834 +0000 UTC m=+0.088948358 container died a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm) Dec 
2 04:47:38 localhost podman[281796]: 2025-12-02 09:47:38.577173403 +0000 UTC m=+0.260267867 container cleanup a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 04:47:38 localhost podman[281796]: nova_compute Dec 2 04:47:38 localhost podman[281837]: error opening file `/run/crun/a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f/status`: No such file or directory Dec 2 04:47:38 localhost podman[281825]: 2025-12-02 09:47:38.687410538 +0000 UTC m=+0.077528202 container cleanup 
a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:47:38 localhost podman[281825]: nova_compute Dec 2 04:47:38 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Dec 2 04:47:38 localhost systemd[1]: Stopped nova_compute container. Dec 2 04:47:38 localhost systemd[1]: Starting nova_compute container... Dec 2 04:47:38 localhost systemd[1]: Started libcrun container. 
Dec 2 04:47:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599d32757aa561883618730f7ad2a353ae4158b524af51217e1c260ed80653f0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:38 localhost podman[281839]: 2025-12-02 09:47:38.850496347 +0000 UTC m=+0.125414652 container init a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3) Dec 2 04:47:38 localhost podman[281839]: 2025-12-02 09:47:38.860444983 +0000 UTC m=+0.135363288 container start a094c2e6ac4b91ea01dadd1aec120ace7c4876478b7e6f2c2a6086a977536c7f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', 
'/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:47:38 localhost podman[281839]: nova_compute Dec 2 04:47:38 localhost nova_compute[281854]: + sudo -E kolla_set_configs Dec 2 04:47:38 localhost systemd[1]: Started nova_compute container. Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Validating config file Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying service configuration files Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf 
Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /etc/ceph Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Creating directory /etc/ceph Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/ceph Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying 
/var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Writing out command to execute Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:38 localhost nova_compute[281854]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 2 04:47:38 localhost nova_compute[281854]: ++ cat /run_command Dec 2 
04:47:38 localhost nova_compute[281854]: + CMD=nova-compute Dec 2 04:47:38 localhost nova_compute[281854]: + ARGS= Dec 2 04:47:38 localhost nova_compute[281854]: + sudo kolla_copy_cacerts Dec 2 04:47:38 localhost nova_compute[281854]: + [[ ! -n '' ]] Dec 2 04:47:38 localhost nova_compute[281854]: + . kolla_extend_start Dec 2 04:47:38 localhost nova_compute[281854]: Running command: 'nova-compute' Dec 2 04:47:38 localhost nova_compute[281854]: + echo 'Running command: '\''nova-compute'\''' Dec 2 04:47:38 localhost nova_compute[281854]: + umask 0022 Dec 2 04:47:38 localhost nova_compute[281854]: + exec nova-compute Dec 2 04:47:40 localhost nova_compute[281854]: 2025-12-02 09:47:40.673 281858 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:47:40 localhost nova_compute[281854]: 2025-12-02 09:47:40.673 281858 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:47:40 localhost nova_compute[281854]: 2025-12-02 09:47:40.673 281858 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 2 04:47:40 localhost nova_compute[281854]: 2025-12-02 09:47:40.673 281858 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 2 04:47:40 localhost nova_compute[281854]: 2025-12-02 09:47:40.795 281858 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:47:40 localhost nova_compute[281854]: 2025-12-02 09:47:40.818 281858 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:47:40 localhost 
nova_compute[281854]: 2025-12-02 09:47:40.819 281858 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 2 04:47:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55251 DF PROTO=TCP SPT=40112 DPT=9102 SEQ=3432569286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A57FE50000000001030307) Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.222 281858 INFO nova.virt.driver [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.338 281858 INFO nova.compute.provider_config [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.346 281858 DEBUG oslo_concurrency.lockutils [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.346 281858 DEBUG oslo_concurrency.lockutils [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.346 281858 DEBUG oslo_concurrency.lockutils [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.347 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.347 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.347 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.347 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] command line args: [] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.347 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.348 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.348 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.348 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.348 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.348 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.348 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] block_device_allocate_retries 
= 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.348 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.349 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.349 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.349 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.349 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.349 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.349 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.350 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.350 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] console_host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.350 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.350 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.350 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.350 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.350 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.351 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.351 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.351 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.351 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.351 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.351 281858 DEBUG oslo_service.service 
[None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.351 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.352 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.352 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.352 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.352 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.352 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.352 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.352 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] host = np0005541913.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.353 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.353 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.353 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.353 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.353 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.354 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - 
- - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.354 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.354 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.354 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.354 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.354 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.354 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.355 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.355 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.355 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.355 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.355 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.355 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.355 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.356 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 
2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.356 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.356 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.356 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.356 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.356 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.356 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.356 281858 DEBUG 
oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.357 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.357 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.357 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.357 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.357 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.357 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 
localhost nova_compute[281854]: 2025-12-02 09:47:41.357 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.358 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.358 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.358 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.358 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.358 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.358 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.358 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.359 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.359 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.359 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.359 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.359 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.359 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.359 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.360 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.360 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.360 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.360 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.360 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.360 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.360 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] publish_errors = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.360 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.361 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.361 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.361 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.361 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.361 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.361 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 
localhost nova_compute[281854]: 2025-12-02 09:47:41.361 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.362 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.362 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.362 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.362 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.362 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.362 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.362 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.362 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.363 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.363 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.363 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.363 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.363 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.363 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - 
-] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.363 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.364 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.364 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.364 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.364 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.364 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.364 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] shelved_poll_interval = 3600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.364 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.364 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.365 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.365 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.365 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.365 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.365 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.365 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.365 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.366 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.366 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.366 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.366 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.366 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.366 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - 
-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.366 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.367 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.367 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.367 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.367 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.367 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.367 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.367 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.367 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.368 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.368 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.368 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.368 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.368 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.368 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.368 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.369 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.369 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.369 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.369 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.369 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.dhcp_domain = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.369 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.370 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.370 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.370 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.370 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.370 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.370 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.local_metadata_per_cell = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.370 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.370 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.371 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.371 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.371 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.371 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.371 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.371 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.371 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.372 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.372 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.372 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.372 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.372 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.backend_argument = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.372 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.372 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.373 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.373 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.373 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.373 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.373 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.373 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.373 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.374 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.374 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.374 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.374 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.374 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.374 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.374 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.375 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.375 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.375 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.375 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.375 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.375 281858 
DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.375 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.376 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.376 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.376 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.376 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.376 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.376 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - 
- - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.376 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.376 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.377 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.377 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.377 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.377 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.377 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.377 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.377 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.378 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.378 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.378 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.378 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.378 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.379 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.379 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.379 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.379 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.379 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.379 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.379 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 
2025-12-02 09:47:41.379 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.380 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.380 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.380 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.380 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.380 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.380 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.381 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.381 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.381 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.381 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.381 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.381 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.381 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.382 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.382 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.382 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.382 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.382 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.382 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.382 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.383 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.383 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.383 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.383 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.383 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.383 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.383 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.384 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.timeout = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.384 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.384 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.384 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.384 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.384 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.384 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.385 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.connection_recycle_time = 3600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.385 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.385 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.385 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.385 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.385 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.385 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.386 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.max_pool_size = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.386 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.386 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.386 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.386 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.386 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.386 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.387 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.slave_connection = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.387 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.387 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.387 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.387 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.387 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.387 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.388 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.connection_trace = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.388 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.388 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.388 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.388 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.388 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.388 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.389 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.max_retries = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.389 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.389 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.389 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.389 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.389 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.389 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.389 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] api_database.sqlite_synchronous = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.390 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.390 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.390 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.390 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.390 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.390 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.390 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.391 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.391 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.391 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.391 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.391 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.391 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.391 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.enable_rbd_download = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.391 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.392 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.392 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.392 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.392 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.392 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.392 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.392 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.393 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.393 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.393 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.393 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.393 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.393 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.393 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.394 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.394 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.394 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.394 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.394 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.394 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.394 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.394 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.395 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.395 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.395 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.395 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.395 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.395 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.limit_cpu_features = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.395 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.396 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.396 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.396 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.396 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.396 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.396 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.volume_attach_retry_count = 10 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.397 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.397 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.397 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.397 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.397 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.397 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.398 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.398 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.398 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.398 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.398 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.398 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.398 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.399 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.auth_section = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.399 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.399 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.399 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.399 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.399 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.399 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.400 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.400 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.400 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.400 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.400 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.400 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.400 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.401 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.401 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.401 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.401 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.401 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.401 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.401 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.401 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.402 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.402 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.402 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.402 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.402 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.402 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.402 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.403 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.403 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.403 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.403 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.403 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.403 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.403 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.404 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.number_of_retries = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.404 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.404 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.404 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.404 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.404 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.404 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.405 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.405 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.405 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.405 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.405 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.405 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.405 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.405 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.406 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.406 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.406 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.406 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.406 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.406 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.406 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.407 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.407 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.407 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.407 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.407 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.407 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.408 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.408 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.408 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.408 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.408 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.408 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.408 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.409 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.409 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.connect_retry_delay = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.409 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.409 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.409 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.409 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.409 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.410 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.410 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.410 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.410 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.410 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.410 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.410 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.410 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.411 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 
2025-12-02 09:47:41.411 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.411 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.411 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.411 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.411 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.411 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.412 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.412 281858 DEBUG 
oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.412 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.412 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.412 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.412 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.412 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.413 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.413 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.413 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.413 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.413 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.413 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.413 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.414 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.414 281858 DEBUG 
oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.414 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.414 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.414 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.414 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.414 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.415 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.415 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - 
-] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.415 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.415 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.415 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.415 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.415 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.415 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.416 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - 
- -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.416 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.416 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.416 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.416 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.416 281858 WARNING oslo_config.cfg [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Dec 2 04:47:41 localhost nova_compute[281854]: live_migration_uri is deprecated for removal in favor of two other options that Dec 2 04:47:41 localhost nova_compute[281854]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Dec 2 04:47:41 localhost nova_compute[281854]: and ``live_migration_inbound_addr`` respectively. Dec 2 04:47:41 localhost nova_compute[281854]: ). 
Its value may be silently ignored in the future.#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.417 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.417 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.417 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.417 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.417 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.417 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.418 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
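The deprecation warning logged above says `live_migration_uri` is deprecated in favor of `live_migration_scheme` and `live_migration_inbound_addr`. A minimal nova.conf sketch of that replacement, with illustrative placeholder values (the address below is an example, not taken from this host; note the `keyfile=` query parameter in the old URI has no direct equivalent in these two options and is typically handled via the nova user's SSH configuration instead):

```ini
# /etc/nova/nova.conf — illustrative sketch only; adapt to your deployment.
[libvirt]
# Old, deprecated form:
#   live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey
# Replacement named in the deprecation warning:
live_migration_scheme = ssh
# Placeholder migration-network address for this compute host:
live_migration_inbound_addr = 192.0.2.10
```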
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.418 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.418 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.418 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.418 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.418 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.418 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.419 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.419 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.419 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.419 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.419 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.419 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rbd_secret_uuid = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.419 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.420 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.420 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.420 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.420 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.420 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.420 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.420 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.421 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.421 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.421 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.421 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.421 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.421 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.421 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.422 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.swtpm_group = tss 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.422 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.422 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.422 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.422 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.422 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.422 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.423 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 
04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.423 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.423 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.423 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.423 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.423 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.423 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.424 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 
2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.424 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.424 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.424 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.424 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.424 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.424 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.425 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 
2025-12-02 09:47:41.425 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.425 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.425 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.425 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.425 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.425 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.426 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.426 281858 DEBUG 
oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.426 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.426 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.426 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.426 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.426 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.427 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.427 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.427 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.427 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.427 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.427 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.427 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.428 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.428 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.428 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.428 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.428 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.428 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.428 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.428 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.429 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] notifications.versioned_notifications_topics = 
['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.429 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.429 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.429 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.429 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.429 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.430 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.430 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.430 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.430 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.430 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.430 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.430 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.430 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.431 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.431 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.431 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.431 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.431 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.431 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.431 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.432 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 
localhost nova_compute[281854]: 2025-12-02 09:47:41.432 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.432 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.432 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.432 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.432 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.432 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.433 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 
09:47:41.433 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.433 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.433 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.433 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.433 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.433 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.434 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.434 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.434 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.434 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.434 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.434 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.434 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.435 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.435 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.driver = 
nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.435 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.435 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.435 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.435 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.435 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.436 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.436 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.ram = 51200 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.436 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.436 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.436 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.436 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.437 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.437 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.437 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.437 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.437 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.437 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.438 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.438 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.438 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.438 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.438 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.438 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.438 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.439 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.439 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.439 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.439 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.439 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.439 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.439 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.440 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.440 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.440 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.440 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.440 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.440 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.441 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.441 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.441 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.441 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.442 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.442 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.442 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.442 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.442 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.443 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.443 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.443 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.443 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.443 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.444 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.444 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.444 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.444 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.445 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.445 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.445 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.445 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.446 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.446 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.446 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.446 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.446 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.447 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.447 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.447 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.447 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.448 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.448 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.448 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.448 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.449 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.449 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.449 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.449 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.450 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.450 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.450 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.450 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.450 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.451 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.451 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.451 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.451 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.451 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.452 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.452 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.452 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.452 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.453 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.453 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.453 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.453 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.453 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.454 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.454 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.454 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.454 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.455 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.455 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.455 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.455 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.455 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.456 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.456 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.456 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.456 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.456 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.457 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.457 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.457 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.457 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.457 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.458 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.458 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.458 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.458 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.459 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.459 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.459 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.459 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.460 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.460 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.460 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.460 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.461 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.461 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.461 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.461 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.462 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.462 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.462 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.462 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.462 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.463 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.463 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.463 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.463 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.463 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.464 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.464 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.464 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.464 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.464 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.465 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.465 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.465 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.465 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.465 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.465 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.465 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.466 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.466 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.466 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.466 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.466 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.466 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.466 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.467 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.467 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.467 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.467 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.467 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.467 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.467 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.468 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.468 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.468 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.468 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.468 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.468 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.469 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.469 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.469 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.469 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.469 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.469 281858 DEBUG oslo_service.service [None
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.469 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.470 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.470 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.470 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.470 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.470 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.470 281858 
DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.470 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.470 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.471 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.471 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.471 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.471 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.471 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.471 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.471 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.472 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.472 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.472 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.472 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - 
- - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.472 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.472 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.472 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.473 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.473 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.473 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.473 281858 DEBUG 
oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.473 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.473 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.473 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.474 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.474 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.474 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.474 281858 
DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.474 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.474 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.474 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.475 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.475 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.475 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.475 
281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.475 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.475 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.475 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.475 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.476 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.476 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.476 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.476 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.476 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.476 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.476 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.477 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.477 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.477 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] 
oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.477 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.477 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.477 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.477 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.477 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.478 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.478 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.478 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.478 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.478 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.478 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.478 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.478 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.479 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.user_domain_name = Default log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.479 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.479 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.479 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.479 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.479 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.479 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.480 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] oslo_reports.log_dir = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.480 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.480 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.480 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.480 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.480 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.480 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.480 281858 DEBUG oslo_service.service [None 
req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.481 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.481 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.481 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.481 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.481 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.481 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.482 281858 DEBUG 
oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.482 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.482 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.482 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.482 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.482 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.482 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 
2025-12-02 09:47:41.482 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.483 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.483 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.483 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.483 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.483 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.483 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.483 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.484 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.484 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.484 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.484 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.484 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.484 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.484 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.484 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.485 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.485 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.485 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.485 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.485 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.485 281858 DEBUG oslo_service.service [None req-bfdcfc7a-e0b9-4ff0-98cd-7e9e403fb20a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.486 281858 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.502 281858 INFO nova.virt.node [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Determined node identity c79215b2-6762-4f7f-a322-f44db2b0b9bd from /var/lib/nova/compute_id
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.503 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.503 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.503 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.504 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.515 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Registering for lifecycle events
_get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.518 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.518 281858 INFO nova.virt.libvirt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Connection event '1' reason 'None'
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.523 281858 INFO nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Libvirt host capabilities
Dec 2 04:47:41 localhost nova_compute[281854]: [capabilities XML elided: the log capture stripped the markup, leaving only element values across hundreds of repeated syslog prefixes. Recoverable details: host UUID f041467c-26d0-44b9-832e-8db5f9b7a49d; arch x86_64; host CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory/page figures 16116612, 4029153, 0, 0; secmodel selinux (doi 0, baselabels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0); secmodel dac (doi 0, labels +107:+107); guest archs i686 (hvm, wordsize 32) and x86_64 (hvm, wordsize 64), both with emulator /usr/libexec/qemu-kvm and machines pc-i440fx-rhel7.6.0 (canonical pc) plus pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (canonical q35).]
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.529 281858 DEBUG nova.virt.libvirt.volume.mount [None
req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.535 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.541 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 2 04:47:41 localhost nova_compute[281854]: [domain capabilities XML elided: markup stripped by the log capture, leaving only element values. Recoverable details: emulator /usr/libexec/qemu-kvm, domain kvm, machine pc-i440fx-rhel7.6.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd (loader types rom and pflash); host CPU model EPYC-Rome, vendor AMD; guest CPU model list includes 486, 486-v1, Broadwell (plus -IBRS, -noTSX, -noTSX-IBRS, -v1 through -v4), Cascadelake-Server (plus -noTSX, -v1 through -v5), Conroe, Conroe-v1, Cooperlake (-v1, -v2), Denverton (-v1 through -v3), Dhyana (-v1, -v2), EPYC, EPYC-Genoa (-v1), EPYC-IBPB, EPYC-Milan (-v1, -v2), EPYC-Rome (-v1 through -v4), EPYC-v1 through EPYC-v4, and GraniteRapids; the dump is truncated mid-stream at this point in the capture.]
nova_compute[281854]: GraniteRapids-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: GraniteRapids-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Haswell-noTSX-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v6 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 
2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v7 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: IvyBridge Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: KnightsMill Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: KnightsMill-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: 
Nehalem Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G1-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G2 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G2-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G3 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G3-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G4-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G5-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Penryn Dec 2 04:47:41 localhost nova_compute[281854]: Penryn-v1 Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge-v1 Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge-v2 Dec 2 04:47:41 
localhost nova_compute[281854]: SapphireRapids Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: SapphireRapids-v1 Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: SapphireRapids-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
Dec 2 04:47:41 localhost nova_compute[281854]: [domain capabilities XML continued; element tags were dropped in the journal capture. Recoverable CPU model names: SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1]
Dec 2 04:47:41 localhost nova_compute[281854]: [domain capabilities XML continued; element tags dropped in capture. Recoverable values: CPU models qemu64, qemu64-v1; memory backing source types: file, anonymous, memfd; disk device types: disk, cdrom, floppy, lun; disk bus types: ide, fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional; graphics types: vnc, egl-headless, dbus; hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; hostdev subsystem types: usb, pci, scsi; rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin; filesystem driver types: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; TPM version: 2.0; redirdev bus: usb; channel types: pty, unix; further enum values: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv; character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer]
Dec 2 04:47:41 localhost nova_compute[281854]: [domain capabilities XML tail; element tags dropped in capture. Hyper-V enlightenments continued: reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; values: 4095, on, off, off, Linux KVM Hv; launch security: tdx] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.548 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: [XML follows; element tags dropped in capture. Recoverable values: emulator /usr/libexec/qemu-kvm, domain type kvm, machine pc-q35-rhel9.8.0, arch i686]
Dec 2 04:47:41 localhost nova_compute[281854]: [domain capabilities XML continued; element tags dropped in capture. Recoverable values: firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no; toggle values: on, off and on, off; host CPU model: EPYC-Rome, vendor: AMD]
Dec 2 04:47:41 localhost nova_compute[281854]: [domain capabilities XML continued; element tags dropped in capture. Recoverable CPU model names: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1 (list continues)]
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cooperlake-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Denverton Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Denverton-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Denverton-v2 
Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Denverton-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dhyana Dec 2 04:47:41 localhost nova_compute[281854]: Dhyana-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dhyana-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Genoa Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Genoa-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-IBPB Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Milan Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Milan-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Milan-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: 
Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome-v4 Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-v1 Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-v2 Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: GraniteRapids Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: GraniteRapids-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: GraniteRapids-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-noTSX-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 
2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: [libvirt domainCapabilities XML flattened by the logger; XML markup lost, repeated syslog prefixes collapsed. Recoverable content follows.]
Dec 2 04:47:41 localhost nova_compute[281854]: [CPU model names:] Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Dec 2 04:47:41 localhost nova_compute[281854]: [device/feature tokens, enclosing element names lost:] file anonymous memfd; disk cdrom floppy lun; fdc scsi virtio usb sata; virtio virtio-transitional virtio-non-transitional; vnc egl-headless dbus; subsystem; default mandatory requisite optional; usb pci scsi; virtio virtio-transitional virtio-non-transitional; random egd builtin; path handle virtiofs; tpm-tis tpm-crb; emulator external; 2.0; usb; pty unix; qemu; builtin; default passt; isa hyperv; null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Dec 2 04:47:41 localhost nova_compute[281854]: [Hyper-V enlightenment names:] relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input; 4095 on off off Linux KVM Hv; tdx
Dec 2 04:47:41 localhost nova_compute[281854]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.594 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.599 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 2 04:47:41 localhost nova_compute[281854]: [capabilities XML again flattened; recoverable tokens:] /usr/libexec/qemu-kvm kvm pc-i440fx-rhel7.6.0 x86_64; /usr/share/OVMF/OVMF_CODE.secboot.fd; rom pflash; yes no; no
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: on Dec 2 04:47:41 localhost nova_compute[281854]: off Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: on Dec 2 04:47:41 localhost nova_compute[281854]: off Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome Dec 2 04:47:41 localhost nova_compute[281854]: AMD Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 
2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: 486 Dec 2 04:47:41 localhost nova_compute[281854]: 486-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Broadwell Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Broadwell-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Broadwell-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Broadwell-noTSX-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Broadwell-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Broadwell-v2 Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Broadwell-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Broadwell-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cascadelake-Server Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cascadelake-Server-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cascadelake-Server-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cascadelake-Server-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cascadelake-Server-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cascadelake-Server-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cascadelake-Server-v5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Conroe Dec 2 04:47:41 localhost nova_compute[281854]: 
Conroe-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Cooperlake Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cooperlake-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Cooperlake-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Denverton Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Denverton-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Denverton-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Denverton-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dhyana Dec 2 04:47:41 localhost nova_compute[281854]: Dhyana-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dhyana-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC Dec 2 04:47:41 localhost nova_compute[281854]: 
EPYC-Genoa Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Genoa-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: 
Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-IBPB Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Milan Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Milan-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Milan-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-Rome-v4 Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-v1 Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-v2 Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: EPYC-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: GraniteRapids Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 
2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: GraniteRapids-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: GraniteRapids-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-noTSX-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: 
Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Icelake-Server-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v6 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 
2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v7 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: KnightsMill Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: KnightsMill-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: 
Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G1-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G2 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G2-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G3 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G3-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G4-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G5-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Penryn Dec 2 04:47:41 localhost nova_compute[281854]: Penryn-v1 Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: 
SandyBridge-v1 Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge-v2 Dec 2 04:47:41 localhost nova_compute[281854]: SapphireRapids Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: SapphireRapids-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: SapphireRapids-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: SapphireRapids-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: SierraForest Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: SierraForest-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Client Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Client-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Client-noTSX-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Client-v1 Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Client-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Client-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Client-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Server Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Server-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Server-noTSX-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Server-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Skylake-Server-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Server-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Server-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Skylake-Server-v5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
Dec 2 04:47:41 localhost nova_compute[281854]: [libvirt domain capabilities XML, continued; the XML element tags were lost during log capture, leaving only text values between repeated syslog prefixes. Recoverable values, in source order:]
Dec 2 04:47:41 localhost nova_compute[281854]:   CPU model list (cont.): Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Dec 2 04:47:41 localhost nova_compute[281854]:   memory backing source types: file, anonymous, memfd
Dec 2 04:47:41 localhost nova_compute[281854]:   disk device types: disk, cdrom, floppy, lun; bus types: ide, fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Dec 2 04:47:41 localhost nova_compute[281854]:   graphics types: vnc, egl-headless, dbus
Dec 2 04:47:41 localhost nova_compute[281854]:   hostdev: subsystem; startupPolicy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; models: virtio, virtio-transitional, virtio-non-transitional
Dec 2 04:47:41 localhost nova_compute[281854]:   rng backend models: random, egd, builtin
Dec 2 04:47:41 localhost nova_compute[281854]:   filesystem drivers: path, handle, virtiofs
Dec 2 04:47:41 localhost nova_compute[281854]:   tpm models: tpm-tis, tpm-crb; backends: emulator, external; version: 2.0
Dec 2 04:47:41 localhost nova_compute[281854]:   redirdev bus: usb; channel types: pty, unix; further values: qemu, builtin, default, passt, isa, hyperv
Dec 2 04:47:41 localhost nova_compute[281854]:   character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Dec 2 04:47:41 localhost nova_compute[281854]:   hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; values: 4095, on, off, off, Linux KVM Hv
Dec 2 04:47:41 localhost nova_compute[281854]:   launch security: tdx
Dec 2 04:47:41 localhost nova_compute[281854]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.693 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: [XML tags likewise stripped; recoverable values:]
Dec 2 04:47:41 localhost nova_compute[281854]:   emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64
Dec 2 04:47:41 localhost nova_compute[281854]:   os firmware: efi; loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: yes, no
Dec 2 04:47:41 localhost nova_compute[281854]:   cpu mode flags: on, off; on, off
Dec 2 04:47:41 localhost nova_compute[281854]:   host-model CPU: EPYC-Rome; vendor: AMD
Dec 2 04:47:41 localhost nova_compute[281854]:   custom CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1 [record truncated at end of this chunk]
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: GraniteRapids-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-noTSX-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Haswell-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: 
Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-noTSX Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v3 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v4 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 
2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v6 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Icelake-Server-v7 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: IvyBridge-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 
localhost nova_compute[281854]: IvyBridge-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: KnightsMill Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: KnightsMill-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Nehalem-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G1-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G2 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G2-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G3 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G3-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G4 Dec 2 
04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G4-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G5 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Opteron_G5-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Penryn Dec 2 04:47:41 localhost nova_compute[281854]: Penryn-v1 Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge-IBRS Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge-v1 Dec 2 04:47:41 localhost nova_compute[281854]: SandyBridge-v2 Dec 2 04:47:41 localhost nova_compute[281854]: SapphireRapids Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: SapphireRapids-v1 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: SapphireRapids-v2 Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: Dec 2 04:47:41 localhost 
nova_compute[281854]: Dec 2 04:47:41 localhost nova_compute[281854]: [libvirt domainCapabilities XML dumped by nova-compute; element tags were lost in log collection, leaving bare values interleaved with repeated syslog prefixes. Recoverable values — supported CPU models: SapphireRapids-v3; SierraForest, SierraForest-v1; Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1 through -v4; Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1 through -v5; Snowridge, Snowridge-v1 through -v4; Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2; athlon, athlon-v1; core2duo, core2duo-v1; coreduo, coreduo-v1; kvm32, kvm32-v1; kvm64, kvm64-v1; n270, n270-v1; pentium, pentium-v1; pentium2, pentium2-v1; pentium3, pentium3-v1; phenom, phenom-v1; qemu32, qemu32-v1; qemu64, qemu64-v1. Memory backing: file, anonymous, memfd. Disk devices: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional. Graphics: vnc, egl-headless, dbus. Firmware: subsystem (default, mandatory, requisite, optional). Hostdev: usb, pci, scsi; models: virtio, virtio-transitional, virtio-non-transitional. RNG backends: random, egd, builtin. Filesystem: path, handle, virtiofs. TPM models: tpm-tis, tpm-crb; backends: emulator, external; version 2.0. Redirdev: usb. Char devices: pty, unix. Crypto backends: qemu, builtin. Interface backends: default, passt. Panic models: isa, hyperv. Serial/console types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus. Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; values 4095, on, off, off, 'Linux KVM Hv'. Launch security: tdx.] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.757 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.758 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.758 281858 INFO nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Secure Boot support detected#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.762 281858 INFO
nova.virt.libvirt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.777 281858 DEBUG nova.virt.libvirt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.804 281858 INFO nova.virt.node [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Determined node identity c79215b2-6762-4f7f-a322-f44db2b0b9bd from /var/lib/nova/compute_id#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.822 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Verified node c79215b2-6762-4f7f-a322-f44db2b0b9bd matches my host np0005541913.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.862 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.868 281858 DEBUG nova.virt.libvirt.vif [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] vif_type=ovs
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T08:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005541913.localdomain',hostname='test',id=2,image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T08:31:55Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005541913.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='e2d97696ab6749899bb8ba5ce29a3de2',ramdisk_id='',reservation_id='r-6ofcfgb6',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-02T08:31:55Z,user_data=None,user_id='cb8b7d2a63b642aa999db12e17eeb9e4',uuid=b254bb7f-2891-4b37-9c44-9700e301ce16,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": 
false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.868 281858 DEBUG nova.network.os_vif_util [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Converting VIF {"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 
09:47:41.869 281858 DEBUG nova.network.os_vif_util [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.870 281858 DEBUG os_vif [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.916 281858 DEBUG ovsdbapp.backend.ovs_idl [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.917 281858 DEBUG ovsdbapp.backend.ovs_idl [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.917 281858 DEBUG ovsdbapp.backend.ovs_idl [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 2 04:47:41 localhost 
nova_compute[281854]: 2025-12-02 09:47:41.917 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.918 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.918 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.918 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.920 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.921 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.937 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.938 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.938 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 04:47:41 localhost nova_compute[281854]: 2025-12-02 09:47:41.939 281858 INFO oslo.privsep.daemon [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp0bjjh037/privsep.sock']#033[00m Dec 2 04:47:42 localhost nova_compute[281854]: 2025-12-02 09:47:42.709 281858 INFO oslo.privsep.daemon [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 2 04:47:42 localhost nova_compute[281854]: 2025-12-02 09:47:42.571 281913 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 04:47:42 localhost nova_compute[281854]: 2025-12-02 09:47:42.577 281913 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 04:47:42 localhost nova_compute[281854]: 2025-12-02 09:47:42.580 281913 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 2 04:47:42 localhost nova_compute[281854]: 2025-12-02 09:47:42.581 281913 INFO oslo.privsep.daemon [-] privsep daemon running as pid 281913#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.010 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.010 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a318f6a-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.011 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a318f6a-b3, col_values=(('external_ids', {'iface-id': '4a318f6a-b3c1-4690-8246-f7d046ccd64a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:b2:03', 'vm-uuid': 'b254bb7f-2891-4b37-9c44-9700e301ce16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.012 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.012 281858 INFO os_vif [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3')#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.013 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:47:43 localhost 
nova_compute[281854]: 2025-12-02 09:47:43.016 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.017 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.180 281858 DEBUG oslo_concurrency.lockutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.180 281858 DEBUG oslo_concurrency.lockutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.181 281858 DEBUG oslo_concurrency.lockutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.181 281858 DEBUG nova.compute.resource_tracker [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.182 281858 DEBUG oslo_concurrency.processutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.667 281858 DEBUG oslo_concurrency.processutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.731 281858 DEBUG nova.virt.libvirt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.732 281858 DEBUG nova.virt.libvirt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:47:43 localhost python3.9[282029]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None 
command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None 
systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.966 281858 WARNING nova.virt.libvirt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.968 281858 DEBUG nova.compute.resource_tracker [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12138MB free_disk=41.837242126464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.968 281858 DEBUG oslo_concurrency.lockutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:47:43 localhost nova_compute[281854]: 2025-12-02 09:47:43.968 281858 DEBUG oslo_concurrency.lockutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.128 281858 DEBUG nova.compute.resource_tracker [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.129 281858 DEBUG nova.compute.resource_tracker [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.129 281858 DEBUG nova.compute.resource_tracker [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:47:44 localhost systemd[1]: Started libpod-conmon-ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a.scope. Dec 2 04:47:44 localhost systemd[1]: Started libcrun container. 
Dec 2 04:47:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1eee259f81f330f1ee0081bc4f7673956ff5103e0f4825b5784a1732364fc7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1eee259f81f330f1ee0081bc4f7673956ff5103e0f4825b5784a1732364fc7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1eee259f81f330f1ee0081bc4f7673956ff5103e0f4825b5784a1732364fc7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.207 281858 DEBUG nova.scheduler.client.report [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:47:44 localhost podman[282057]: 2025-12-02 09:47:44.217907025 +0000 UTC m=+0.149062345 container init ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', 
'/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, io.buildah.version=1.41.3) Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.229 281858 DEBUG nova.scheduler.client.report [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.229 281858 DEBUG nova.compute.provider_tree [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:47:44 localhost 
podman[282057]: 2025-12-02 09:47:44.235221758 +0000 UTC m=+0.166377078 container start ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3) Dec 2 04:47:44 localhost python3.9[282029]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.245 281858 DEBUG nova.scheduler.client.report [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.273 281858 DEBUG nova.scheduler.client.report [None 
req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Applying nova statedir ownership Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 
path: /var/lib/nova/ Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/b254bb7f-2891-4b37-9c44-9700e301ce16/ Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/b254bb7f-2891-4b37-9c44-9700e301ce16 already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/b254bb7f-2891-4b37-9c44-9700e301ce16 to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/b254bb7f-2891-4b37-9c44-9700e301ce16/console.log Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 
42436 path: /var/lib/nova/instances/_base/4ee0f3f792b433d78f415a6f600ca9c7d9f0adb3 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/ Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-4ee0f3f792b433d78f415a6f600ca9c7d9f0adb3 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Dec 2 04:47:44 
localhost nova_compute_init[282077]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673 Dec 2 04:47:44 localhost nova_compute_init[282077]: INFO:nova_statedir:Nova statedir ownership complete Dec 2 04:47:44 localhost systemd[1]: libpod-ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a.scope: Deactivated successfully. 
Dec 2 04:47:44 localhost podman[282078]: 2025-12-02 09:47:44.302694361 +0000 UTC m=+0.047451589 container died ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3) Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.309 281858 DEBUG oslo_concurrency.processutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:47:44 localhost podman[282090]: 2025-12-02 09:47:44.388872744 +0000 UTC m=+0.071946024 container cleanup ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, 
org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 04:47:44 localhost systemd[1]: libpod-conmon-ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a.scope: Deactivated successfully. 
Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.755 281858 DEBUG oslo_concurrency.processutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.764 281858 DEBUG nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Dec 2 04:47:44 localhost nova_compute[281854]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.764 281858 INFO nova.virt.libvirt.host [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] kernel doesn't support AMD SEV#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.766 281858 DEBUG nova.compute.provider_tree [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.766 281858 DEBUG nova.virt.libvirt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.797 281858 DEBUG nova.scheduler.client.report [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 
'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.835 281858 DEBUG nova.compute.resource_tracker [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.835 281858 DEBUG oslo_concurrency.lockutils [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.836 281858 DEBUG nova.service [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.865 281858 DEBUG nova.service [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 2 04:47:44 localhost nova_compute[281854]: 2025-12-02 09:47:44.866 281858 DEBUG nova.servicegroup.drivers.db [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] DB_Driver: join new ServiceGroup member np0005541913.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 2 04:47:45 
localhost systemd[1]: session-60.scope: Deactivated successfully. Dec 2 04:47:45 localhost systemd[1]: session-60.scope: Consumed 1min 31.314s CPU time. Dec 2 04:47:45 localhost systemd-logind[757]: Session 60 logged out. Waiting for processes to exit. Dec 2 04:47:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:47:45 localhost systemd-logind[757]: Removed session 60. Dec 2 04:47:45 localhost systemd[1]: tmp-crun.olc6Vo.mount: Deactivated successfully. Dec 2 04:47:45 localhost systemd[1]: var-lib-containers-storage-overlay-eb1eee259f81f330f1ee0081bc4f7673956ff5103e0f4825b5784a1732364fc7-merged.mount: Deactivated successfully. Dec 2 04:47:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ced4ea4dc33686779bdf74a001de2acc50ced170aea28ab3292bc675d820599a-userdata-shm.mount: Deactivated successfully. Dec 2 04:47:45 localhost podman[282156]: 2025-12-02 09:47:45.204895581 +0000 UTC m=+0.102288725 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 
'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm) Dec 2 04:47:45 localhost podman[282156]: 2025-12-02 09:47:45.216127891 +0000 UTC m=+0.113520985 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 04:47:45 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:47:46 localhost nova_compute[281854]: 2025-12-02 09:47:46.512 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:46 localhost nova_compute[281854]: 2025-12-02 09:47:46.922 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:47:48 localhost podman[282175]: 2025-12-02 09:47:48.44834888 +0000 UTC m=+0.082970619 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 2 04:47:48 localhost podman[282175]: 2025-12-02 09:47:48.482191014 +0000 UTC 
m=+0.116812753 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:47:48 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:47:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55252 DF PROTO=TCP SPT=40112 DPT=9102 SEQ=3432569286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A59FE40000000001030307) Dec 2 04:47:51 localhost nova_compute[281854]: 2025-12-02 09:47:51.514 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:51 localhost nova_compute[281854]: 2025-12-02 09:47:51.925 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:47:53 localhost podman[282193]: 2025-12-02 09:47:53.458780507 +0000 UTC m=+0.092481613 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, version=9.6, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public) Dec 2 04:47:53 localhost podman[282193]: 2025-12-02 09:47:53.500951934 +0000 UTC m=+0.134653040 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6) Dec 2 04:47:53 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:47:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:47:55 localhost systemd[1]: tmp-crun.ueAFtY.mount: Deactivated successfully. 
Dec 2 04:47:55 localhost podman[282213]: 2025-12-02 09:47:55.455336303 +0000 UTC m=+0.096681455 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:47:55 localhost podman[282213]: 2025-12-02 09:47:55.46344696 +0000 UTC m=+0.104792072 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:47:55 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:47:56 localhost nova_compute[281854]: 2025-12-02 09:47:56.517 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:47:56 localhost nova_compute[281854]: 2025-12-02 09:47:56.927 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:01 localhost nova_compute[281854]: 2025-12-02 09:48:01.521 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:01 localhost nova_compute[281854]: 2025-12-02 09:48:01.929 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:48:02 localhost podman[282243]: 2025-12-02 09:48:02.463812998 +0000 UTC m=+0.097784744 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:48:02 localhost podman[282243]: 2025-12-02 09:48:02.478014917 +0000 UTC m=+0.111986663 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 04:48:02 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:48:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:03.032 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:03.033 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:03.035 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 
MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22740 DF PROTO=TCP SPT=49628 DPT=9102 SEQ=420584671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A5D95E0000000001030307) Dec 2 04:48:04 localhost openstack_network_exporter[242845]: ERROR 09:48:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:48:04 localhost openstack_network_exporter[242845]: ERROR 09:48:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:48:04 localhost openstack_network_exporter[242845]: ERROR 09:48:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:48:04 localhost openstack_network_exporter[242845]: ERROR 09:48:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:48:04 localhost openstack_network_exporter[242845]: Dec 2 04:48:04 localhost openstack_network_exporter[242845]: ERROR 09:48:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:48:04 localhost openstack_network_exporter[242845]: Dec 2 04:48:04 localhost podman[282401]: Dec 2 04:48:04 localhost podman[282401]: 2025-12-02 09:48:04.129987984 +0000 UTC m=+0.084174651 container create 25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_goldstine, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, 
release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:48:04 localhost systemd[1]: Started libpod-conmon-25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243.scope. Dec 2 04:48:04 localhost podman[282401]: 2025-12-02 09:48:04.098186934 +0000 UTC m=+0.052373631 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:48:04 localhost systemd[1]: Started libcrun container. Dec 2 04:48:04 localhost podman[282401]: 2025-12-02 09:48:04.228527898 +0000 UTC m=+0.182714565 container init 25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_goldstine, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public) Dec 2 04:48:04 localhost podman[282401]: 2025-12-02 09:48:04.241767581 +0000 UTC m=+0.195954248 container start 25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_goldstine, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=1763362218, version=7, ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:48:04 localhost podman[282401]: 2025-12-02 09:48:04.242133292 +0000 UTC m=+0.196319989 container attach 25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_goldstine, version=7, release=1763362218, com.redhat.component=rhceph-container, 
build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:48:04 localhost lucid_goldstine[282415]: 167 167 Dec 2 04:48:04 localhost systemd[1]: libpod-25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243.scope: Deactivated successfully. 
Dec 2 04:48:04 localhost podman[282401]: 2025-12-02 09:48:04.25220193 +0000 UTC m=+0.206388597 container died 25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_goldstine, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:48:04 localhost podman[282420]: 2025-12-02 09:48:04.347348293 +0000 UTC m=+0.080146783 container remove 25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_goldstine, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public) Dec 2 04:48:04 localhost systemd[1]: libpod-conmon-25e48a37bc847410d76883af3a0a8bdc7b07c94cfd8c846b2e92aca008e4d243.scope: Deactivated successfully. Dec 2 04:48:04 localhost podman[282442]: Dec 2 04:48:04 localhost podman[282442]: 2025-12-02 09:48:04.575678575 +0000 UTC m=+0.072689563 container create 4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_chebyshev, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat 
Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 2 04:48:04 localhost systemd[1]: Started libpod-conmon-4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47.scope. Dec 2 04:48:04 localhost systemd[1]: Started libcrun container. Dec 2 04:48:04 localhost podman[282442]: 2025-12-02 09:48:04.539150029 +0000 UTC m=+0.036160937 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:48:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b686dc17f23e0315f34b7cc4f7018493dbaebddc656563916366a73d5a1742ce/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 2 04:48:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b686dc17f23e0315f34b7cc4f7018493dbaebddc656563916366a73d5a1742ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:48:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b686dc17f23e0315f34b7cc4f7018493dbaebddc656563916366a73d5a1742ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:48:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b686dc17f23e0315f34b7cc4f7018493dbaebddc656563916366a73d5a1742ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:48:04 localhost podman[282442]: 2025-12-02 09:48:04.648880291 +0000 UTC m=+0.145891199 container init 4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_chebyshev, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc.) Dec 2 04:48:04 localhost podman[282442]: 2025-12-02 09:48:04.660279816 +0000 UTC m=+0.157290724 container start 4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_chebyshev, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=1763362218, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:48:04 localhost podman[282442]: 2025-12-02 09:48:04.660510542 +0000 UTC m=+0.157521510 container attach 4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_chebyshev, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, version=7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:48:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22741 DF PROTO=TCP SPT=49628 DPT=9102 SEQ=420584671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A5DD650000000001030307) Dec 2 04:48:05 localhost systemd[1]: 
var-lib-containers-storage-overlay-1c72a26b6b6ad18fa93a0fd1675b8dcf633d0b072b12660cc6d8a09e43e9dfea-merged.mount: Deactivated successfully. Dec 2 04:48:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55253 DF PROTO=TCP SPT=40112 DPT=9102 SEQ=3432569286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A5DFE40000000001030307) Dec 2 04:48:05 localhost blissful_chebyshev[282457]: [ Dec 2 04:48:05 localhost blissful_chebyshev[282457]: { Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "available": false, Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "ceph_device": false, Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "lsm_data": {}, Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "lvs": [], Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "path": "/dev/sr0", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "rejected_reasons": [ Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "Has a FileSystem", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "Insufficient space (<5GB)" Dec 2 04:48:05 localhost blissful_chebyshev[282457]: ], Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "sys_api": { Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "actuators": null, Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "device_nodes": "sr0", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "human_readable_size": "482.00 KB", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "id_bus": "ata", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "model": "QEMU DVD-ROM", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "nr_requests": "2", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "partitions": {}, Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "path": "/dev/sr0", Dec 2 04:48:05 
localhost blissful_chebyshev[282457]: "removable": "1", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "rev": "2.5+", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "ro": "0", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "rotational": "1", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "sas_address": "", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "sas_device_handle": "", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "scheduler_mode": "mq-deadline", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "sectors": 0, Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "sectorsize": "2048", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "size": 493568.0, Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "support_discard": "0", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "type": "disk", Dec 2 04:48:05 localhost blissful_chebyshev[282457]: "vendor": "QEMU" Dec 2 04:48:05 localhost blissful_chebyshev[282457]: } Dec 2 04:48:05 localhost blissful_chebyshev[282457]: } Dec 2 04:48:05 localhost blissful_chebyshev[282457]: ] Dec 2 04:48:05 localhost systemd[1]: libpod-4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47.scope: Deactivated successfully. Dec 2 04:48:05 localhost systemd[1]: libpod-4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47.scope: Consumed 1.085s CPU time. 
Dec 2 04:48:05 localhost podman[282442]: 2025-12-02 09:48:05.706857084 +0000 UTC m=+1.203867992 container died 4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_chebyshev, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, ceph=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z) Dec 2 04:48:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:48:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:48:05 localhost podman[284297]: 2025-12-02 09:48:05.838243155 +0000 UTC m=+0.099058428 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:48:05 localhost systemd[1]: var-lib-containers-storage-overlay-b686dc17f23e0315f34b7cc4f7018493dbaebddc656563916366a73d5a1742ce-merged.mount: Deactivated successfully. 
Dec 2 04:48:05 localhost podman[284291]: 2025-12-02 09:48:05.869344256 +0000 UTC m=+0.147752649 container remove 4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_chebyshev, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, release=1763362218, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:48:05 localhost systemd[1]: libpod-conmon-4c6ea5adc25334a5169f7005a9991657a3b03f99c8e523e217a37408b43a7b47.scope: Deactivated successfully. 
Dec 2 04:48:05 localhost podman[284298]: 2025-12-02 09:48:05.805799509 +0000 UTC m=+0.069354415 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:48:05 localhost podman[284297]: 2025-12-02 09:48:05.923117444 +0000 UTC m=+0.183932667 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 
'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:48:05 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:48:05 localhost podman[284298]: 2025-12-02 09:48:05.943297203 +0000 UTC m=+0.206852049 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:48:05 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:48:06 localhost podman[240799]: time="2025-12-02T09:48:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:48:06 localhost podman[240799]: @ - - [02/Dec/2025:09:48:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148825 "" "Go-http-client/1.1" Dec 2 04:48:06 localhost podman[240799]: @ - - [02/Dec/2025:09:48:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17215 "" "Go-http-client/1.1" Dec 2 04:48:06 localhost nova_compute[281854]: 2025-12-02 09:48:06.524 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:06 localhost nova_compute[281854]: 2025-12-02 09:48:06.963 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22742 DF PROTO=TCP SPT=49628 DPT=9102 SEQ=420584671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A5E5640000000001030307) Dec 2 04:48:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37569 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=1588680873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A5E9E50000000001030307) Dec 2 04:48:08 localhost ovn_metadata_agent[160216]: 2025-12-02 
09:48:08.318 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 2 04:48:08 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:08.320 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 2 04:48:08 localhost nova_compute[281854]: 2025-12-02 09:48:08.364 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:48:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22743 DF PROTO=TCP SPT=49628 DPT=9102 SEQ=420584671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A5F5240000000001030307)
Dec 2 04:48:11 localhost nova_compute[281854]: 2025-12-02 09:48:11.527 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:48:11 localhost nova_compute[281854]: 2025-12-02 09:48:11.997 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 04:48:14 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:14.322 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 2 04:48:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:48:15 localhost podman[284367]: 2025-12-02 09:48:15.443138376 +0000 UTC m=+0.086065422 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:48:15 localhost podman[284367]: 2025-12-02 09:48:15.481196783 +0000 UTC m=+0.124123819 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 04:48:15 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.101 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.101 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.101 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.105 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bf7794d1-1fb1-472e-a0fa-db85a6da187b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.101966', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '067da1fa-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': 'a3afb7a36447b56f91819bbb341c4aea7285e8efad95e2704509d49457e82e61'}]}, 'timestamp': '2025-12-02 09:48:16.106354', '_unique_id': 'c134ad952f364f0eb0d2c2cfce35c3b5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.107 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.108 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.145 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.145 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f971dd93-304a-447b-8849-cd486efae039', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.108514', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0683a6c2-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': 'bb380173e34ef89dab37cc4801ba889ec107d89944f3b5914706ec253627090e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.108514', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0683b4aa-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': '6efeb5763f31f0ca34b8a060186f85c92e7b7ab1b472e00237999cbc4c3f9ad8'}]}, 'timestamp': '2025-12-02 09:48:16.146081', '_unique_id': '6e79939c99d74fe19baea6b37a93e624'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.146 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.147 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.160 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.160 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd518270d-bb9f-4977-b09e-a69dd26637c2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.147871', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0685e5fe-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.366935341, 'message_signature': 'c7ea3f6bdca6866d789b9ed88c0e235ca81bb996f09af537a4396ef13b1f45fd'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.147871', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0685ee96-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.366935341, 'message_signature': '7208b7ff9601c2cee205e47ba7cced047802a93adeb6ed93856605e5f6d58cfc'}]}, 'timestamp': '2025-12-02 09:48:16.160659', '_unique_id': '30226740ef96457dbdb18dfadbf8f2a1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.161 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 131 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '82d38609-915c-4476-a7c0-40d846c97d51', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 131, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.161703', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '068620be-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': '8e827bc9bb15bbd26e95e573eac53565e2e0de56b994a386d8f2e59fc29eb572'}]}, 'timestamp': '2025-12-02 09:48:16.161932', '_unique_id': '6a85fcb9b4ed474da18d1f54269f8e2f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.162 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.162 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 196 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '027e1845-e938-4ce7-bd91-4c5c3ddc8eef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 196, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.162884', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '06864eae-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': '78064fd3a992352eac982b2922f929862d81288249d1f1a160794adbb2a8bf2b'}]}, 'timestamp': '2025-12-02 09:48:16.163092', '_unique_id': '7a65e6d1173f4db1b437a88e6940f216'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.163 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 524 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2adea395-f14f-4bfa-9108-58ff2b78145f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 524, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.164023', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '06867afa-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': 'f72b8903d73cb4d4f59ea24c77129b79135453a5a630c895097e344d19568c10'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.164023', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '068681f8-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': 'ec24df6c38c59655f7de4ca3ed88394f6bbe3631733dbc7127e9deecf505742d'}]}, 'timestamp': '2025-12-02 09:48:16.164391', '_unique_id': '81800868100547a2b06da2bd1a4d781c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 11468 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '63ade8cb-9927-4767-a450-e61bad7ed309', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11468, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.165356', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '0686af2a-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': '026a8cca430826ffeb78133547e926b15937654058cab852b89a833cc2b3a979'}]}, 'timestamp': '2025-12-02 09:48:16.165561', '_unique_id': '9a69d4279d404dda906c463d00e01e5b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.165 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.166 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.166 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.166 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2060b8cb-326f-4002-b9ae-dc6e30af26d7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.166487', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0686db3a-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.366935341, 'message_signature': '7032598a6188ae4f9ca05b388eb980bb6468fdf3a85d53440512f143a97ac802'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.166487', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0686e2ec-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.366935341, 'message_signature': 'fe4538f5a0c674bf8c876ebcdc29fe25a901e02c79d4bbd6dca65beb94aac193'}]}, 'timestamp': '2025-12-02 09:48:16.166873', '_unique_id': 'a8159618c7a744a7ad2ec9069aff17f3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.167 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.167 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '254d8390-0c11-4b09-a74c-d1a30f7d6592', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.167817', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '06870f42-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': '12624cf682925ac19618af4c349f8b6a13ec4caf758f50cf1ab2244111e5a231'}]}, 'timestamp': '2025-12-02 09:48:16.168021', '_unique_id': 'f10ac44f8ed744b48571838a14adb11b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:48:16.168 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.168 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 73900032 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '455389c2-1a76-4386-b51b-5b8cd574d4c9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73900032, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.168955', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '06873cc4-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': 'ab658e0182105320ca732198ecbece5d25f71401677d973e3ac35cb169958b59'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.168955', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0687461a-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': 'c2c4e7726fa2115dd0563f44db4af5a3d34b7cc6cf6fd7fe05b27615200fcd86'}]}, 'timestamp': '2025-12-02 09:48:16.169446', '_unique_id': '3eedf128a6e74343b2b62ffca1b738f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.169 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.170 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.170 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.170 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 196 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '91f763e3-6e69-490c-bb08-89d4e306549b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 196, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.170900', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '068788f0-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': 'b058f1ba24338b9b1b1e896dda9cbcb706d83b63608896c8e32366c924f6ca24'}]}, 'timestamp': '2025-12-02 09:48:16.171181', '_unique_id': '9f82aef4816d4c159404867ab7f82fcb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.171 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.172 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.192 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 52.35546875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5d6d6c0d-e44e-4d1d-8443-5b654436b33b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.35546875, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:48:16.172437', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '068ac66e-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.411057499, 'message_signature': 'ba98e5962066bd60e0920bf45ba5e3c53781aae309d3c5c280b792840a1c64c3'}]}, 'timestamp': '2025-12-02 09:48:16.192419', '_unique_id': '9b8f0ce8920e4446974204909c87aa6b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.193 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '310d61ea-cf45-4a08-a439-7fb4173c6110', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.194025', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '068b1060-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': '35654fae5e48963456276ec7388093352e08a6a0042cb9ddae2b0e1183fa46a7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.194025', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '068b19de-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': 'db380c5b3eb09e516568e9c735818f03b923c44408c6ab82b4814a1657351cf8'}]}, 'timestamp': '2025-12-02 09:48:16.194529', '_unique_id': 'cba6f88da0da4b93a737976f52dfe744'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.196 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.196 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 59090000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9deab18e-0c55-435d-85ba-754accf74b71', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 59090000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:48:16.196142', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '068b62e0-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.411057499, 'message_signature': '688952417cae1626d599c639e5d717d290001a75b452334499e6020caffcec7b'}]}, 'timestamp': '2025-12-02 09:48:16.196403', '_unique_id': 'ec760b6241be4b81bc1ff74e41d3b06e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.197 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:48:16.197 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 9425 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f5418429-a254-4783-943a-47cb0be53f29', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9425, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.197840', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '068ba598-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': 
'58b83a4170b81a3e8e74367d60eef6513f753066179fe5fed3a2295084461ca5'}]}, 'timestamp': '2025-12-02 09:48:16.198125', '_unique_id': '276736bfee8f4d069e832fd30983a755'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR 
oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.199 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.199 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '64c4b715-aa19-4f6a-9fb9-7811583cf400', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.199365', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 
0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '068be0da-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': '129b3c9a031226f0ef25d52853bde03f6afc60cb391b34f838ebac37aa12a030'}]}, 'timestamp': '2025-12-02 09:48:16.199655', '_unique_id': 'ae78b229860544278397d127328e7b4f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR 
oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3517292e-f157-41f6-a3b8-fb2e6af8aa50', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.200927', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '068c1dde-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': '42d5e9d2764b4074b22a23fe021d8324c59a0665bf6fadd00f9b449ed312dae0'}]}, 'timestamp': '2025-12-02 09:48:16.201202', '_unique_id': 'f35af534e87140e4a076ce32ccd0c103'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.202 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.202 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1433516318 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.202 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 164656012 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0dd16b95-9d1c-48fb-9ecc-199c8f7fa029', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1433516318, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.202444', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '068c58f8-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': '80c3ea3a870043e5aef8514fa8770a3d7e67b7a8f7a50783b9fbcf909b33aff7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 164656012, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.202444', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '068c6348-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': '7f0fe3b965ceab1b928e4f6ea0d9022d232d9280cc797860f82a25aa8d024638'}]}, 'timestamp': '2025-12-02 09:48:16.202969', '_unique_id': 'c95b82a958a745e4a807df071bd7e3de'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:48:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.204 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.204 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 89 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '375d7814-f891-41c1-bdfd-7cfd695b75f4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 89, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:48:16.204385', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 
'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '068ca510-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.321025073, 'message_signature': '486a65b11808b928f88783bf877244308df59dee43b7f5a72264faaea2d92cc6'}]}, 'timestamp': '2025-12-02 09:48:16.204677', '_unique_id': 'c37f7d706c7349fcb08e2814a536d5ed'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in 
_connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1c32735f-2056-4730-9263-26c68babfa76', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.206156', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '068ce9ee-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.366935341, 'message_signature': 'cbd6e592e2b225a5cc7cf0f1aec8c17170789dca2930754d8ed058e69a0e40ef'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 
'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.206156', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '068cf3bc-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.366935341, 'message_signature': '3272e9a018488cc05d202a63b6ee4e9912023af197dc29a8fcdd18f0a07ae522'}]}, 'timestamp': '2025-12-02 09:48:16.206685', '_unique_id': 'd80de2a1ab7b47f2b3927a467f48b631'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.207 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 286697561 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 DEBUG ceilometer.compute.pollsters [-] 
b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 39228582 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e2e05042-d184-43b9-81d5-e87d0f322ee8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 286697561, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:48:16.207934', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '068d2f58-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': 'c3b30f94d5c1b0c0d587feb341a76153e005454ab78c1101e65adba2051bbbd1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 39228582, 
'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:48:16.207934', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '068d38ae-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11058.327607669, 'message_signature': 'd502c616b6a2d42484a87e2d07bcf3d2b6c08fde50d37fc887231e73c63ccf5f'}]}, 'timestamp': '2025-12-02 09:48:16.208422', '_unique_id': '20ca740e4fd64c53be59527ba7d62cdd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:48:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:48:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:48:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:48:16 localhost nova_compute[281854]: 2025-12-02 09:48:16.528 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:17 localhost nova_compute[281854]: 2025-12-02 09:48:17.034 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:17 localhost nova_compute[281854]: 2025-12-02 09:48:17.869 281858 DEBUG oslo_service.periodic_task [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:17 localhost nova_compute[281854]: 2025-12-02 09:48:17.895 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Triggering sync for uuid b254bb7f-2891-4b37-9c44-9700e301ce16 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 2 04:48:17 localhost nova_compute[281854]: 2025-12-02 09:48:17.896 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:17 localhost nova_compute[281854]: 2025-12-02 09:48:17.897 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:17 localhost nova_compute[281854]: 2025-12-02 09:48:17.897 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:17 localhost nova_compute[281854]: 2025-12-02 09:48:17.921 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" "released" by 
"nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:48:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22744 DF PROTO=TCP SPT=49628 DPT=9102 SEQ=420584671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A615E40000000001030307) Dec 2 04:48:19 localhost podman[284387]: 2025-12-02 09:48:19.454828075 +0000 UTC m=+0.084950991 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 04:48:19 localhost podman[284387]: 2025-12-02 09:48:19.484699724 +0000 UTC m=+0.114822600 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:48:19 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:48:21 localhost nova_compute[281854]: 2025-12-02 09:48:21.531 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:22 localhost nova_compute[281854]: 2025-12-02 09:48:22.037 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:23 localhost nova_compute[281854]: 2025-12-02 09:48:23.123 281858 DEBUG nova.compute.manager [None req-39f4a9bf-0492-4be9-985b-94a3d1c1b88a cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:48:23 localhost nova_compute[281854]: 2025-12-02 09:48:23.128 281858 INFO nova.compute.manager [None req-39f4a9bf-0492-4be9-985b-94a3d1c1b88a cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Retrieving diagnostics#033[00m Dec 2 04:48:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. 
Dec 2 04:48:24 localhost podman[284406]: 2025-12-02 09:48:24.441719392 +0000 UTC m=+0.081523309 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41) Dec 2 04:48:24 localhost podman[284406]: 2025-12-02 09:48:24.452734697 +0000 UTC m=+0.092538624 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 04:48:24 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:48:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 04:48:26 localhost podman[284428]: 2025-12-02 09:48:26.435987306 +0000 UTC m=+0.072656793 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:48:26 localhost podman[284428]: 2025-12-02 09:48:26.443438005 +0000 UTC m=+0.080107522 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:48:26 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:48:26 localhost nova_compute[281854]: 2025-12-02 09:48:26.535 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:27 localhost nova_compute[281854]: 2025-12-02 09:48:27.084 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:28 localhost nova_compute[281854]: 2025-12-02 09:48:28.681 281858 DEBUG oslo_concurrency.lockutils [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16" by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:28 localhost nova_compute[281854]: 2025-12-02 09:48:28.682 281858 DEBUG oslo_concurrency.lockutils [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" acquired by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:28 localhost nova_compute[281854]: 2025-12-02 09:48:28.683 281858 DEBUG nova.compute.manager [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:48:28 localhost nova_compute[281854]: 2025-12-02 09:48:28.687 281858 DEBUG nova.compute.manager [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: 
b254bb7f-2891-4b37-9c44-9700e301ce16] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m Dec 2 04:48:28 localhost nova_compute[281854]: 2025-12-02 09:48:28.692 281858 DEBUG nova.objects.instance [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lazy-loading 'flavor' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:48:28 localhost nova_compute[281854]: 2025-12-02 09:48:28.732 281858 DEBUG nova.virt.libvirt.driver [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Dec 2 04:48:31 localhost nova_compute[281854]: 2025-12-02 09:48:31.544 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:31 localhost nova_compute[281854]: 2025-12-02 09:48:31.752 281858 INFO nova.virt.libvirt.driver [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Instance shutdown successfully after 3 seconds.#033[00m Dec 2 04:48:32 localhost kernel: device tap4a318f6a-b3 left promiscuous mode Dec 2 04:48:32 localhost NetworkManager[5965]: [1764668912.0417] device (tap4a318f6a-b3): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00048|binding|INFO|Releasing lport 
4a318f6a-b3c1-4690-8246-f7d046ccd64a from this chassis (sb_readonly=0) Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00049|binding|INFO|Setting lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a down in Southbound Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00050|binding|INFO|Removing iface tap4a318f6a-b3 ovn-installed in OVS Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.048 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.052 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:32.058 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:b2:03 192.168.0.102'], port_security=['fa:16:3e:26:b2:03 192.168.0.102'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.102/24', 'neutron:device_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005541913.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1725f11b-f73c-4c4f-b3d3-772d68fcc09e 23293c48-39ca-43a0-a462-ebc8626a7f6c', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d69817-a35d-4528-880f-f329bfbd969c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4a318f6a-b3c1-4690-8246-f7d046ccd64a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:48:32 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:32.061 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 4a318f6a-b3c1-4690-8246-f7d046ccd64a in datapath 595e1c9b-709c-41d2-9212-0b18b13291a8 unbound from our chassis#033[00m Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00051|ovn_bfd|INFO|Disabled BFD on interface ovn-be95dc-0 Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00052|ovn_bfd|INFO|Disabled BFD on interface ovn-2587fe-0 Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00053|ovn_bfd|INFO|Disabled BFD on interface ovn-4d166c-0 Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.061 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00054|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.063 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:32.065 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 595e1c9b-709c-41d2-9212-0b18b13291a8, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.068 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:32.074 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[efbcb7d0-e20d-48fe-b61d-8bddbcf37534]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:32 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:32.077 160221 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8 namespace which is not needed anymore#033[00m Dec 2 04:48:32 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully. Dec 2 04:48:32 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 4min 7.353s CPU time. Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.086 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.089 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost systemd-machined[84262]: Machine qemu-1-instance-00000002 terminated. 
Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00055|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.099 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost kernel: device tap4a318f6a-b3 entered promiscuous mode Dec 2 04:48:32 localhost NetworkManager[5965]: [1764668912.1682] manager: (tap4a318f6a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/15) Dec 2 04:48:32 localhost kernel: device tap4a318f6a-b3 left promiscuous mode Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00056|binding|INFO|Claiming lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a for this chassis. Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00057|binding|INFO|4a318f6a-b3c1-4690-8246-f7d046ccd64a: Claiming fa:16:3e:26:b2:03 192.168.0.102 Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.170 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.177 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:32.184 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:b2:03 192.168.0.102'], port_security=['fa:16:3e:26:b2:03 192.168.0.102'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '192.168.0.102/24', 'neutron:device_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005541913.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1725f11b-f73c-4c4f-b3d3-772d68fcc09e 23293c48-39ca-43a0-a462-ebc8626a7f6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d69817-a35d-4528-880f-f329bfbd969c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4a318f6a-b3c1-4690-8246-f7d046ccd64a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00058|ovn_bfd|INFO|Enabled BFD on interface ovn-be95dc-0 Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00059|ovn_bfd|INFO|Enabled BFD on interface ovn-2587fe-0 Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00060|ovn_bfd|INFO|Enabled BFD on interface ovn-4d166c-0 Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.188 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.197 281858 INFO nova.virt.libvirt.driver [-] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Instance destroyed successfully.#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.199 281858 DEBUG nova.objects.instance [None req-e3730b24-38b8-4992-a4b0-220655a1395e 
cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lazy-loading 'numa_topology' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.201 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.209 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.214 281858 DEBUG nova.compute.manager [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00061|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00062|binding|INFO|Releasing lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a from this chassis (sb_readonly=0) Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.219 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:32.229 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:b2:03 192.168.0.102'], port_security=['fa:16:3e:26:b2:03 192.168.0.102'], 
type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.102/24', 'neutron:device_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005541913.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1725f11b-f73c-4c4f-b3d3-772d68fcc09e 23293c48-39ca-43a0-a462-ebc8626a7f6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d69817-a35d-4528-880f-f329bfbd969c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4a318f6a-b3c1-4690-8246-f7d046ccd64a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.236 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00063|ovn_bfd|INFO|Disabled BFD on interface ovn-be95dc-0 Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00064|ovn_bfd|INFO|Disabled BFD on interface ovn-2587fe-0 Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00065|ovn_bfd|INFO|Disabled BFD on interface ovn-4d166c-0 Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.240 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:32 localhost ovn_controller[154505]: 2025-12-02T09:48:32Z|00066|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.297 281858 DEBUG oslo_concurrency.lockutils [None req-e3730b24-38b8-4992-a4b0-220655a1395e cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" "released" by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: held 3.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.481 281858 DEBUG nova.compute.manager [req-4d6c6eb6-ae7a-4fc2-93af-2b34924fdc51 req-fbe73d02-414c-480b-8f2f-545135c3c23f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Received event network-vif-unplugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.482 281858 DEBUG oslo_concurrency.lockutils [req-4d6c6eb6-ae7a-4fc2-93af-2b34924fdc51 req-fbe73d02-414c-480b-8f2f-545135c3c23f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.483 281858 DEBUG oslo_concurrency.lockutils [req-4d6c6eb6-ae7a-4fc2-93af-2b34924fdc51 req-fbe73d02-414c-480b-8f2f-545135c3c23f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock 
"b254bb7f-2891-4b37-9c44-9700e301ce16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.484 281858 DEBUG oslo_concurrency.lockutils [req-4d6c6eb6-ae7a-4fc2-93af-2b34924fdc51 req-fbe73d02-414c-480b-8f2f-545135c3c23f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.484 281858 DEBUG nova.compute.manager [req-4d6c6eb6-ae7a-4fc2-93af-2b34924fdc51 req-fbe73d02-414c-480b-8f2f-545135c3c23f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] No waiting events found dispatching network-vif-unplugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 04:48:32 localhost nova_compute[281854]: 2025-12-02 09:48:32.485 281858 WARNING nova.compute.manager [req-4d6c6eb6-ae7a-4fc2-93af-2b34924fdc51 req-fbe73d02-414c-480b-8f2f-545135c3c23f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Received unexpected event network-vif-unplugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a for instance with vm_state stopped and task_state None.#033[00m Dec 2 04:48:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:48:33 localhost podman[284505]: 2025-12-02 09:48:33.450773028 +0000 UTC m=+0.089027091 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Dec 2 04:48:33 localhost podman[284505]: 2025-12-02 09:48:33.464990887 +0000 UTC m=+0.103244940 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:48:33 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:48:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30372 DF PROTO=TCP SPT=40134 DPT=9102 SEQ=1121546869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A64E8D0000000001030307) Dec 2 04:48:34 localhost openstack_network_exporter[242845]: ERROR 09:48:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:48:34 localhost openstack_network_exporter[242845]: ERROR 09:48:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:48:34 localhost openstack_network_exporter[242845]: ERROR 09:48:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:48:34 localhost openstack_network_exporter[242845]: ERROR 09:48:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:48:34 localhost openstack_network_exporter[242845]: Dec 2 04:48:34 localhost openstack_network_exporter[242845]: ERROR 09:48:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:48:34 localhost openstack_network_exporter[242845]: Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.511 281858 DEBUG nova.compute.manager [None req-e93c46bf-57e3-49d5-992e-2d67c7c6f1a5 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.527 281858 DEBUG nova.compute.manager [req-10a7b066-1348-4a94-8649-94365cb4398f req-8185f3e7-6ea6-4b45-aaab-316a66806064 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 
b254bb7f-2891-4b37-9c44-9700e301ce16] Received event network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.528 281858 DEBUG oslo_concurrency.lockutils [req-10a7b066-1348-4a94-8649-94365cb4398f req-8185f3e7-6ea6-4b45-aaab-316a66806064 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.528 281858 DEBUG oslo_concurrency.lockutils [req-10a7b066-1348-4a94-8649-94365cb4398f req-8185f3e7-6ea6-4b45-aaab-316a66806064 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.529 281858 DEBUG oslo_concurrency.lockutils [req-10a7b066-1348-4a94-8649-94365cb4398f req-8185f3e7-6ea6-4b45-aaab-316a66806064 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.530 281858 DEBUG nova.compute.manager [req-10a7b066-1348-4a94-8649-94365cb4398f req-8185f3e7-6ea6-4b45-aaab-316a66806064 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] 
[instance: b254bb7f-2891-4b37-9c44-9700e301ce16] No waiting events found dispatching network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.530 281858 WARNING nova.compute.manager [req-10a7b066-1348-4a94-8649-94365cb4398f req-8185f3e7-6ea6-4b45-aaab-316a66806064 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Received unexpected event network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a for instance with vm_state stopped and task_state None.#033[00m Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server [None req-e93c46bf-57e3-49d5-992e-2d67c7c6f1a5 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance b254bb7f-2891-4b37-9c44-9700e301ce16 in power state shutdown. Cannot get_diagnostics while the instance is in this state. 
Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server Traceback (most recent call last): Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server _emit_versioned_exception_notification( Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR 
oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server raise self.value Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server raise self.value Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) Dec 2 04:48:34 
localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server raise exception.InstanceInvalidState( Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance b254bb7f-2891-4b37-9c44-9700e301ce16 in power state shutdown. Cannot get_diagnostics while the instance is in this state. Dec 2 04:48:34 localhost nova_compute[281854]: 2025-12-02 09:48:34.564 281858 ERROR oslo_messaging.rpc.server #033[00m Dec 2 04:48:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30373 DF PROTO=TCP SPT=40134 DPT=9102 SEQ=1121546869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A652A40000000001030307) Dec 2 04:48:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22745 DF PROTO=TCP SPT=49628 DPT=9102 SEQ=420584671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A655E40000000001030307) Dec 2 04:48:36 localhost podman[240799]: time="2025-12-02T09:48:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:48:36 localhost podman[240799]: @ - - [02/Dec/2025:09:48:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148826 "" "Go-http-client/1.1" Dec 2 04:48:36 localhost podman[240799]: @ - - [02/Dec/2025:09:48:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17104 "" "Go-http-client/1.1" Dec 2 04:48:36 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:48:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:48:36 localhost podman[284526]: 2025-12-02 09:48:36.455344121 +0000 UTC m=+0.090093949 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 04:48:36 localhost podman[284526]: 2025-12-02 09:48:36.495021761 +0000 UTC m=+0.129771629 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible) Dec 2 04:48:36 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:48:36 localhost podman[284525]: 2025-12-02 09:48:36.505792739 +0000 UTC m=+0.141992896 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:48:36 localhost podman[284525]: 2025-12-02 09:48:36.542091209 +0000 UTC m=+0.178291296 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:48:36 localhost nova_compute[281854]: 2025-12-02 09:48:36.545 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:36 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:48:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30374 DF PROTO=TCP SPT=40134 DPT=9102 SEQ=1121546869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A65AA50000000001030307) Dec 2 04:48:37 localhost nova_compute[281854]: 2025-12-02 09:48:37.088 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55254 DF PROTO=TCP SPT=40112 DPT=9102 SEQ=3432569286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A65DE40000000001030307) Dec 2 04:48:40 localhost nova_compute[281854]: 2025-12-02 09:48:40.877 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:40 localhost nova_compute[281854]: 2025-12-02 09:48:40.877 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:40 localhost nova_compute[281854]: 2025-12-02 09:48:40.878 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:48:40 localhost nova_compute[281854]: 2025-12-02 09:48:40.878 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:48:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30375 DF PROTO=TCP SPT=40134 DPT=9102 SEQ=1121546869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A66A640000000001030307) Dec 2 04:48:41 localhost nova_compute[281854]: 2025-12-02 09:48:41.579 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:42 localhost nova_compute[281854]: 2025-12-02 09:48:42.090 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:42 localhost nova_compute[281854]: 2025-12-02 09:48:42.327 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:48:42 localhost nova_compute[281854]: 2025-12-02 09:48:42.327 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:48:42 localhost nova_compute[281854]: 2025-12-02 09:48:42.328 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:48:42 localhost nova_compute[281854]: 2025-12-02 09:48:42.328 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:48:42 localhost podman[284488]: 2025-12-02 09:48:42.365811307 +0000 UTC m=+10.145274427 container stop 7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO 
Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:48:42 localhost systemd[1]: libpod-7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0.scope: Deactivated successfully. Dec 2 04:48:42 localhost podman[284488]: 2025-12-02 09:48:42.374024384 +0000 UTC m=+10.153487494 container died 7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, release=1761123044, batch=17.1_20251118.1) Dec 2 04:48:42 localhost podman[284488]: 2025-12-02 09:48:42.563589973 +0000 UTC m=+10.343053063 container cleanup 7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64) Dec 2 04:48:42 localhost podman[284574]: 2025-12-02 09:48:42.5812966 +0000 UTC m=+0.197018436 container cleanup 7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 2 04:48:42 localhost systemd[1]: libpod-conmon-7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0.scope: Deactivated successfully. 
Dec 2 04:48:42 localhost podman[284590]: 2025-12-02 09:48:42.663338574 +0000 UTC m=+0.076171871 container remove 7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.668 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[639379d6-f901-43fa-9586-c26625e173a1]: (4, ('Tue Dec 2 09:48:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8 
(7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0)\n7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0\nTue Dec 2 09:48:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8 (7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0)\n7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0\n', 'time="2025-12-02T09:48:42Z" level=warning msg="StopSignal SIGTERM failed to stop container neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8 in 10 seconds, resorting to SIGKILL"\n', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.671 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[34ee909e-09fb-437b-9138-eba6356640a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.672 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap595e1c9b-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:48:42 localhost nova_compute[281854]: 2025-12-02 09:48:42.719 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:42 localhost kernel: device tap595e1c9b-70 left promiscuous mode Dec 2 04:48:42 localhost nova_compute[281854]: 2025-12-02 09:48:42.732 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.735 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[f88c8799-0820-4a5b-abf4-90c06fbb73a7]: (4, True) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.753 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[eba3154b-9c17-49bb-9a14-d12b8833a0b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.754 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7f51dc-9761-4ec2-ab28-794b475bdb89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.770 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[0c15aafb-1c13-4c9c-b6b3-addea7869f3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647713, 'reachable_time': 41307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 
'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284614, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.789 160371 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.791 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3f1b00-ba00-4181-8cc3-351d508cfa5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.793 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 4a318f6a-b3c1-4690-8246-f7d046ccd64a in datapath 595e1c9b-709c-41d2-9212-0b18b13291a8 unbound from our chassis#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.796 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 595e1c9b-709c-41d2-9212-0b18b13291a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.797 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[726a56ec-b375-4e1b-abf5-7de95fa1f034]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.799 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 4a318f6a-b3c1-4690-8246-f7d046ccd64a in datapath 595e1c9b-709c-41d2-9212-0b18b13291a8 unbound from our chassis#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.801 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 595e1c9b-709c-41d2-9212-0b18b13291a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 04:48:42 localhost ovn_metadata_agent[160216]: 2025-12-02 09:48:42.802 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5dae17ac-18fc-4de3-96de-27fc7b3b009c]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:48:43 localhost systemd[1]: var-lib-containers-storage-overlay-9f61ced3f88a0be87d665800e8e7cb17559a616ee2c3a746c87a603ddb5549d7-merged.mount: Deactivated successfully. Dec 2 04:48:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e98e9f24e52d3758fb7e26858bb0e13707794227fcf30ccf3a4aafe11bccfd0-userdata-shm.mount: Deactivated successfully. Dec 2 04:48:43 localhost systemd[1]: run-netns-ovnmeta\x2d595e1c9b\x2d709c\x2d41d2\x2d9212\x2d0b18b13291a8.mount: Deactivated successfully. Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.446 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.472 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.473 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.474 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.475 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.475 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.476 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.477 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.477 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.478 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.478 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.498 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.498 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.499 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.499 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.500 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:48:43 localhost nova_compute[281854]: 2025-12-02 09:48:43.954 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.473 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:48:44 
localhost nova_compute[281854]: 2025-12-02 09:48:44.474 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.649 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.651 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12579MB free_disk=41.83708190917969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", 
"address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.651 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.652 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.839 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': 
{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.840 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.840 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:48:44 localhost nova_compute[281854]: 2025-12-02 09:48:44.879 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:48:45 localhost nova_compute[281854]: 2025-12-02 09:48:45.328 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:48:45 localhost nova_compute[281854]: 2025-12-02 09:48:45.335 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:48:45 
localhost nova_compute[281854]: 2025-12-02 09:48:45.554 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:48:45 localhost nova_compute[281854]: 2025-12-02 09:48:45.576 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:48:45 localhost nova_compute[281854]: 2025-12-02 09:48:45.577 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:48:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 04:48:46 localhost podman[284660]: 2025-12-02 09:48:46.438718408 +0000 UTC m=+0.079322032 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:48:46 localhost podman[284660]: 2025-12-02 09:48:46.452780869 +0000 UTC m=+0.093384463 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Dec 2 04:48:46 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:48:46 localhost nova_compute[281854]: 2025-12-02 09:48:46.610 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:47 localhost nova_compute[281854]: 2025-12-02 09:48:47.092 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:47 localhost nova_compute[281854]: 2025-12-02 09:48:47.192 281858 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 04:48:47 localhost nova_compute[281854]: 2025-12-02 09:48:47.192 281858 INFO nova.compute.manager [-] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] VM Stopped (Lifecycle Event)#033[00m Dec 2 04:48:47 localhost nova_compute[281854]: 2025-12-02 09:48:47.211 281858 DEBUG nova.compute.manager [None req-cd504abc-ed71-4db4-ab92-243250a2a677 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:48:47 localhost nova_compute[281854]: 2025-12-02 09:48:47.214 281858 DEBUG nova.compute.manager [None req-cd504abc-ed71-4db4-ab92-243250a2a677 - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 04:48:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30376 DF PROTO=TCP SPT=40134 DPT=9102 SEQ=1121546869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A689E40000000001030307) Dec 2 04:48:50 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:48:50 localhost podman[284677]: 2025-12-02 09:48:50.451366079 +0000 UTC m=+0.088358561 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0) Dec 2 04:48:50 localhost podman[284677]: 2025-12-02 09:48:50.484912103 +0000 UTC m=+0.121904535 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:48:50 localhost systemd[1]: 
34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:48:51 localhost nova_compute[281854]: 2025-12-02 09:48:51.612 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.093 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.555 281858 DEBUG nova.compute.manager [None req-e509d44f-e02c-4355-b6e6-d768bc766666 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server [None req-e509d44f-e02c-4355-b6e6-d768bc766666 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance b254bb7f-2891-4b37-9c44-9700e301ce16 in power state shutdown. Cannot get_diagnostics while the instance is in this state. 
Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server Traceback (most recent call last): Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server _emit_versioned_exception_notification( Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR 
oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server raise self.value Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server raise self.value Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) Dec 2 04:48:52 
localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server raise exception.InstanceInvalidState( Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance b254bb7f-2891-4b37-9c44-9700e301ce16 in power state shutdown. Cannot get_diagnostics while the instance is in this state. Dec 2 04:48:52 localhost nova_compute[281854]: 2025-12-02 09:48:52.575 281858 ERROR oslo_messaging.rpc.server #033[00m Dec 2 04:48:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:48:55 localhost podman[284695]: 2025-12-02 09:48:55.433525403 +0000 UTC m=+0.078066349 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 04:48:55 localhost podman[284695]: 2025-12-02 09:48:55.451271161 +0000 UTC m=+0.095812057 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 
9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_id=edpm, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 04:48:55 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:48:56 localhost nova_compute[281854]: 2025-12-02 09:48:56.615 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:57 localhost nova_compute[281854]: 2025-12-02 09:48:57.131 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:48:57 localhost podman[284715]: 2025-12-02 09:48:57.44292931 +0000 UTC m=+0.086943844 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:48:57 localhost podman[284715]: 2025-12-02 09:48:57.479420512 +0000 UTC m=+0.123435056 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:48:57 localhost systemd[1]: 
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 04:48:58 localhost nova_compute[281854]: 2025-12-02 09:48:58.441 281858 DEBUG nova.objects.instance [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lazy-loading 'flavor' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:48:58 localhost nova_compute[281854]: 2025-12-02 09:48:58.458 281858 DEBUG oslo_concurrency.lockutils [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:48:58 localhost nova_compute[281854]: 2025-12-02 09:48:58.459 281858 DEBUG oslo_concurrency.lockutils [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:48:58 localhost nova_compute[281854]: 2025-12-02 09:48:58.459 281858 DEBUG nova.network.neutron [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 04:48:58 localhost nova_compute[281854]: 2025-12-02 09:48:58.460 281858 DEBUG nova.objects.instance [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lazy-loading 'info_cache' on Instance uuid 
b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.518 281858 DEBUG nova.network.neutron [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.531 281858 DEBUG oslo_concurrency.lockutils [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.555 281858 INFO nova.virt.libvirt.driver [-] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Instance destroyed successfully.#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.555 281858 DEBUG nova.objects.instance [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lazy-loading 'numa_topology' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.567 281858 DEBUG nova.objects.instance [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lazy-loading 'resources' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.579 281858 DEBUG nova.virt.libvirt.vif [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T08:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005541913.localdomain',hostname='test',id=2,image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T08:31:55Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=4,progress=0,project_id='e2d97696ab6749899bb8ba5ce29a3de2',ramdisk_id='',reservation_id='r-6ofcfgb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-02T09:48:32Z,user_data=None,user_id='cb8b7d2a63b642aa999db12e17eeb9e4',uuid=b254bb7f-2891-4b37-9c44-9700e301ce16,vcpu_model=,vcpus=1,vm_mode=None,vm_state='
stopped') vif={"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.579 281858 DEBUG nova.network.os_vif_util [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Converting VIF {"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, 
"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.580 281858 DEBUG nova.network.os_vif_util [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.581 281858 DEBUG os_vif [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.584 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.584 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a318f6a-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.587 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.589 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.593 281858 INFO os_vif [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3')#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.596 281858 DEBUG nova.virt.libvirt.host [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.596 281858 INFO nova.virt.libvirt.host [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 
e2d97696ab6749899bb8ba5ce29a3de2 - - default default] UEFI support detected#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.604 281858 DEBUG nova.virt.libvirt.driver [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Start _get_guest_xml network_info=[{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum=,container_format='bare',created_at=,direct_url=,disk_format='qcow2',id=6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa,min_disk=1,min_ram=0,name=,owner=,properties=ImageMetaProps,protected=,size=,status=,tags=,updated_at=,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}], 'ephemerals': [{'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'size': 1, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.608 281858 WARNING nova.virt.libvirt.driver [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.611 281858 DEBUG nova.virt.libvirt.host [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Searching host: 'np0005541913.localdomain' for CPU controller through CGroups V1... 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.611 281858 DEBUG nova.virt.libvirt.host [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.613 281858 DEBUG nova.virt.libvirt.host [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Searching host: 'np0005541913.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.614 281858 DEBUG nova.virt.libvirt.host [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.615 281858 DEBUG nova.virt.libvirt.driver [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.615 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T08:30:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='45a99238-6f19-4f9e-be82-6ef3af1dcb31',id=2,is_public=True,memory_mb=512,name='m1.small',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=,container_format='bare',created_at=,direct_url=,disk_format='qcow2',id=6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa,min_disk=1,min_ram=0,name=,owner=,properties=ImageMetaProps,protected=,size=,status=,tags=,updated_at=,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.616 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.616 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 
cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.617 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.617 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.618 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.618 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.619 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 
e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.619 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.619 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.620 281858 DEBUG nova.virt.hardware [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.620 281858 DEBUG nova.objects.instance [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.636 281858 DEBUG nova.privsep.utils [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] 
Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Dec 2 04:48:59 localhost nova_compute[281854]: 2025-12-02 09:48:59.637 281858 DEBUG oslo_concurrency.processutils [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.125 281858 DEBUG oslo_concurrency.processutils [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.126 281858 DEBUG oslo_concurrency.processutils [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.592 281858 DEBUG oslo_concurrency.processutils [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.594 281858 DEBUG nova.virt.libvirt.vif [None 
req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T08:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005541913.localdomain',hostname='test',id=2,image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T08:31:55Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=4,progress=0,project_id='e2d97696ab6749899bb8ba5ce29a3de2',ramdisk_id='',reservation_id='r-6ofcfgb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-02T09:48:32Z,us
er_data=None,user_id='cb8b7d2a63b642aa999db12e17eeb9e4',uuid=b254bb7f-2891-4b37-9c44-9700e301ce16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.594 281858 DEBUG nova.network.os_vif_util [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Converting VIF {"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.595 281858 DEBUG nova.network.os_vif_util [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.597 281858 DEBUG nova.objects.instance [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Lazy-loading 'pci_devices' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.611 281858 DEBUG nova.virt.libvirt.driver [None 
req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] End _get_guest_xml xml= [guest XML not reproduced: the markup was stripped during log capture, leaving only bare text nodes across the repeated nova_compute[281854] continuation lines; the surviving fragments include domain name instance-00000002, uuid b254bb7f-2891-4b37-9c44-9700e301ce16, memory 524288 KiB, 1 vCPU, nova metadata (display name 'test', creationTime 2025-12-02 09:48:59, owner/project admin/admin), sysinfo RDO OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9, os type hvm, and RNG backend /dev/urandom] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.612 281858 DEBUG nova.virt.libvirt.driver [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.612 281858 DEBUG nova.virt.libvirt.driver [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.613 281858 DEBUG nova.virt.libvirt.vif [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T08:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005541913.localdomain',hostname='test',id=2,image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T08:31:55Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=,power_state=4,progress=0,project_id='e2d97696ab6749899bb8ba5ce29a3de2',ramdisk_id='',reservation_id='r-6ofcfgb6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-02T09:48:32Z,user_data=None,user_id='cb8b7d2a63b642aa999db12e17eeb9e4',uuid=b254bb7f-2891-4b37-9c44-9700e301ce16,vcpu_model=VirtCPUModel,vcpus=
1,vm_mode=None,vm_state='stopped') vif={"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.613 281858 DEBUG nova.network.os_vif_util [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Converting VIF {"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.613 281858 DEBUG nova.network.os_vif_util [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.614 281858 DEBUG os_vif [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.614 281858 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.615 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.615 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.617 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.617 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a318f6a-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.618 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a318f6a-b3, col_values=(('external_ids', {'iface-id': '4a318f6a-b3c1-4690-8246-f7d046ccd64a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:b2:03', 'vm-uuid': 'b254bb7f-2891-4b37-9c44-9700e301ce16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.619 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 
localhost nova_compute[281854]: 2025-12-02 09:49:00.621 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.624 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.625 281858 INFO os_vif [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:b2:03,bridge_name='br-int',has_traffic_filtering=True,id=4a318f6a-b3c1-4690-8246-f7d046ccd64a,network=Network(595e1c9b-709c-41d2-9212-0b18b13291a8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a318f6a-b3')#033[00m Dec 2 04:49:00 localhost systemd[1]: Started libvirt secret daemon. Dec 2 04:49:00 localhost kernel: device tap4a318f6a-b3 entered promiscuous mode Dec 2 04:49:00 localhost NetworkManager[5965]: [1764668940.7296] manager: (tap4a318f6a-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/16) Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.731 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost ovn_controller[154505]: 2025-12-02T09:49:00Z|00067|binding|INFO|Claiming lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a for this chassis. 
Dec 2 04:49:00 localhost ovn_controller[154505]: 2025-12-02T09:49:00Z|00068|binding|INFO|4a318f6a-b3c1-4690-8246-f7d046ccd64a: Claiming fa:16:3e:26:b2:03 192.168.0.102 Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.736 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost systemd-udevd[284811]: Network interface NamePolicy= disabled on kernel command line. Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.742 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost ovn_controller[154505]: 2025-12-02T09:49:00Z|00069|ovn_bfd|INFO|Enabled BFD on interface ovn-be95dc-0 Dec 2 04:49:00 localhost ovn_controller[154505]: 2025-12-02T09:49:00Z|00070|ovn_bfd|INFO|Enabled BFD on interface ovn-2587fe-0 Dec 2 04:49:00 localhost ovn_controller[154505]: 2025-12-02T09:49:00Z|00071|ovn_bfd|INFO|Enabled BFD on interface ovn-4d166c-0 Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.751 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.747 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:b2:03 192.168.0.102'], port_security=['fa:16:3e:26:b2:03 192.168.0.102'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.102/24', 'neutron:device_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'neutron:device_owner': 'compute:nova', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1725f11b-f73c-4c4f-b3d3-772d68fcc09e 23293c48-39ca-43a0-a462-ebc8626a7f6c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d69817-a35d-4528-880f-f329bfbd969c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4a318f6a-b3c1-4690-8246-f7d046ccd64a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.749 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 4a318f6a-b3c1-4690-8246-f7d046ccd64a in datapath 595e1c9b-709c-41d2-9212-0b18b13291a8 bound to our chassis#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.751 160221 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 595e1c9b-709c-41d2-9212-0b18b13291a8#033[00m Dec 2 04:49:00 localhost NetworkManager[5965]: [1764668940.7554] device (tap4a318f6a-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 04:49:00 localhost NetworkManager[5965]: [1764668940.7559] device (tap4a318f6a-b3): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.761 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[640f5e76-ce0a-454e-94c9-f7a1e0c19217]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 
09:49:00.762 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap595e1c9b-71 in ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.764 160340 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap595e1c9b-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.764 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[55b1b839-68f7-4b71-83e4-81738cb153e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.766 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[bed7f676-ce4b-4c7b-b633-bc4bd2b3416b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.776 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.779 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0aac6a-9db1-4c40-b9fd-23a8c7ae6dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost systemd-machined[84262]: New machine qemu-2-instance-00000002. 
Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.795 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost ovn_controller[154505]: 2025-12-02T09:49:00Z|00072|binding|INFO|Setting lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a ovn-installed in OVS Dec 2 04:49:00 localhost ovn_controller[154505]: 2025-12-02T09:49:00Z|00073|binding|INFO|Setting lport 4a318f6a-b3c1-4690-8246-f7d046ccd64a up in Southbound Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.803 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000002. Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.821 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[989fc9af-7c82-492b-a23c-ad91846466fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost nova_compute[281854]: 2025-12-02 09:49:00.832 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.851 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[123467d3-e9bf-47b5-a513-6ea458c6ea15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost systemd-udevd[284812]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.860 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5622ad-6a0f-4d6c-b01c-6a3f2a7a026e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost NetworkManager[5965]: [1764668940.8630] manager: (tap595e1c9b-70): new Veth device (/org/freedesktop/NetworkManager/Devices/17) Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.891 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[379a235f-0d02-4c7e-a741-884babad1ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.895 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[06cc937f-d64a-4a80-aca1-9be6bd2822c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap595e1c9b-71: link becomes ready Dec 2 04:49:00 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap595e1c9b-70: link becomes ready Dec 2 04:49:00 localhost NetworkManager[5965]: [1764668940.9127] device (tap595e1c9b-70): carrier: link connected Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.915 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8a5265-7337-4c36-a823-d26b81e98bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.932 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[566e0857-e806-4b50-8ea9-40e7765328e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap595e1c9b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], 
['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e8:5a:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 
'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1110306, 'reachable_time': 17832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], 
['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284849, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.948 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0efd23-c238-4ecb-b90b-f25c4c0c6f21]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:5a19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1110306, 'tstamp': 1110306}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284857, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.964 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[69ff36fa-7278-4a50-8625-d9f6c0614621]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap595e1c9b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 
'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e8:5a:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1110306, 'reachable_time': 17832, 
'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284866, 'error': None, 'target': 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:00 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:00.994 160340 DEBUG oslo.privsep.daemon [-] 
privsep: reply[7b4faabd-cac2-471b-b0d1-c2c3c2e1311a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:01.058 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcc03f3-8883-485c-800b-f2a001ea4f68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:01.060 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap595e1c9b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:01.061 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:01.061 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap595e1c9b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.063 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:01 localhost kernel: device tap595e1c9b-70 entered promiscuous mode Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.068 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:01.070 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
DbSetCommand(_result=None, table=Interface, record=tap595e1c9b-70, col_values=(('external_ids', {'iface-id': 'd6e7da3f-8574-49e0-8ba1-2f642b3cec92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.071 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:01 localhost ovn_controller[154505]: 2025-12-02T09:49:01Z|00074|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.080 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:01.081 160221 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/595e1c9b-709c-41d2-9212-0b18b13291a8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/595e1c9b-709c-41d2-9212-0b18b13291a8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:01.083 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[15eb07b8-fc6b-4faf-98b7-3541fe483d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:01.085 160221 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: global Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: log /dev/log local0 debug Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: log-tag haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8 Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: user 
root Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: group root Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: maxconn 1024 Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: pidfile /var/lib/neutron/external/pids/595e1c9b-709c-41d2-9212-0b18b13291a8.pid.haproxy Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: daemon Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: defaults Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: log global Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: mode http Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: option httplog Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: option dontlognull Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: option http-server-close Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: option forwardfor Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: retries 3 Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: timeout http-request 30s Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: timeout connect 30s Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: timeout client 32s Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: timeout server 32s Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: timeout http-keep-alive 30s Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: listen listener Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: bind 169.254.169.254:80 Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: server metadata /var/lib/neutron/metadata_proxy Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: http-request add-header X-OVN-Network-ID 595e1c9b-709c-41d2-9212-0b18b13291a8 Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 04:49:01 localhost ovn_metadata_agent[160216]: 
2025-12-02 09:49:01.088 160221 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8', 'env', 'PROCESS_TAG=haproxy-595e1c9b-709c-41d2-9212-0b18b13291a8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/595e1c9b-709c-41d2-9212-0b18b13291a8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.177 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.178 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] VM Resumed (Lifecycle Event)#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.189 281858 DEBUG nova.compute.manager [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.208 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.214 281858 INFO nova.virt.libvirt.driver [-] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Instance rebooted successfully.#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.215 
281858 DEBUG nova.compute.manager [None req-d355f445-1690-4948-9eb0-b65105b7c945 cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.216 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.261 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] During sync_power_state the instance has a pending task (powering-on). 
Skip.#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.262 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.262 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] VM Started (Lifecycle Event)#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.293 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.302 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.516 281858 DEBUG nova.compute.manager [req-445e87c6-82f2-445d-ae68-1a346e15005b req-888e82d1-4cf6-43ea-a253-3119ad0798f9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Received event network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.518 281858 DEBUG oslo_concurrency.lockutils [req-445e87c6-82f2-445d-ae68-1a346e15005b 
req-888e82d1-4cf6-43ea-a253-3119ad0798f9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.519 281858 DEBUG oslo_concurrency.lockutils [req-445e87c6-82f2-445d-ae68-1a346e15005b req-888e82d1-4cf6-43ea-a253-3119ad0798f9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.520 281858 DEBUG oslo_concurrency.lockutils [req-445e87c6-82f2-445d-ae68-1a346e15005b req-888e82d1-4cf6-43ea-a253-3119ad0798f9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.521 281858 DEBUG nova.compute.manager [req-445e87c6-82f2-445d-ae68-1a346e15005b req-888e82d1-4cf6-43ea-a253-3119ad0798f9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] No waiting events found dispatching network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.521 281858 WARNING nova.compute.manager 
[req-445e87c6-82f2-445d-ae68-1a346e15005b req-888e82d1-4cf6-43ea-a253-3119ad0798f9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Received unexpected event network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a for instance with vm_state active and task_state None.#033[00m Dec 2 04:49:01 localhost podman[284926]: Dec 2 04:49:01 localhost podman[284926]: 2025-12-02 09:49:01.57803463 +0000 UTC m=+0.105719869 container create 4bf88e5dd3d6887471d25c63df52897e585725af1f4acc121cd653bb392a20e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 04:49:01 localhost systemd[1]: Started libpod-conmon-4bf88e5dd3d6887471d25c63df52897e585725af1f4acc121cd653bb392a20e9.scope. Dec 2 04:49:01 localhost podman[284926]: 2025-12-02 09:49:01.523200124 +0000 UTC m=+0.050885353 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 04:49:01 localhost ovn_controller[154505]: 2025-12-02T09:49:01Z|00075|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.650 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:01 localhost systemd[1]: Started libcrun container. 
Dec 2 04:49:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a163f6cf84ad26a81b0d772b07c48e80c4153a70edcd04a5d728b961dd8a35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 04:49:01 localhost podman[284926]: 2025-12-02 09:49:01.694984904 +0000 UTC m=+0.222670133 container init 4bf88e5dd3d6887471d25c63df52897e585725af1f4acc121cd653bb392a20e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 04:49:01 localhost podman[284926]: 2025-12-02 09:49:01.708398517 +0000 UTC m=+0.236083746 container start 4bf88e5dd3d6887471d25c63df52897e585725af1f4acc121cd653bb392a20e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.712 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:01 localhost ovn_controller[154505]: 2025-12-02T09:49:01Z|00076|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:49:01 localhost 
ovn_controller[154505]: 2025-12-02T09:49:01Z|00077|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:49:01 localhost nova_compute[281854]: 2025-12-02 09:49:01.738 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:01 localhost neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8[284940]: [NOTICE] (284944) : New worker (284946) forked Dec 2 04:49:01 localhost neutron-haproxy-ovnmeta-595e1c9b-709c-41d2-9212-0b18b13291a8[284940]: [NOTICE] (284944) : Loading success. Dec 2 04:49:02 localhost ovn_controller[154505]: 2025-12-02T09:49:02Z|00078|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:49:02 localhost nova_compute[281854]: 2025-12-02 09:49:02.607 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:02 localhost ovn_controller[154505]: 2025-12-02T09:49:02Z|00079|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 04:49:02 localhost nova_compute[281854]: 2025-12-02 09:49:02.734 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:02 localhost snmpd[69635]: IfIndex of an interface changed. Such interfaces will appear multiple times in IF-MIB. 
Dec 2 04:49:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:03.033 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:49:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:03.034 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:49:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:03.034 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:49:03 localhost nova_compute[281854]: 2025-12-02 09:49:03.555 281858 DEBUG nova.compute.manager [req-c342436f-186e-40d9-8d18-cccac88faf73 req-5e3a6970-2563-45d0-ac97-f54bb8a5ce1b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Received event network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 04:49:03 localhost nova_compute[281854]: 2025-12-02 09:49:03.555 281858 DEBUG oslo_concurrency.lockutils [req-c342436f-186e-40d9-8d18-cccac88faf73 req-5e3a6970-2563-45d0-ac97-f54bb8a5ce1b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:49:03 
localhost nova_compute[281854]: 2025-12-02 09:49:03.556 281858 DEBUG oslo_concurrency.lockutils [req-c342436f-186e-40d9-8d18-cccac88faf73 req-5e3a6970-2563-45d0-ac97-f54bb8a5ce1b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:49:03 localhost nova_compute[281854]: 2025-12-02 09:49:03.556 281858 DEBUG oslo_concurrency.lockutils [req-c342436f-186e-40d9-8d18-cccac88faf73 req-5e3a6970-2563-45d0-ac97-f54bb8a5ce1b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:49:03 localhost nova_compute[281854]: 2025-12-02 09:49:03.556 281858 DEBUG nova.compute.manager [req-c342436f-186e-40d9-8d18-cccac88faf73 req-5e3a6970-2563-45d0-ac97-f54bb8a5ce1b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] No waiting events found dispatching network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 04:49:03 localhost nova_compute[281854]: 2025-12-02 09:49:03.556 281858 WARNING nova.compute.manager [req-c342436f-186e-40d9-8d18-cccac88faf73 req-5e3a6970-2563-45d0-ac97-f54bb8a5ce1b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Received unexpected event network-vif-plugged-4a318f6a-b3c1-4690-8246-f7d046ccd64a for instance with vm_state active and task_state None.#033[00m Dec 2 04:49:03 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48716 DF PROTO=TCP SPT=45224 DPT=9102 SEQ=2646252626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A6C3BD0000000001030307) Dec 2 04:49:04 localhost openstack_network_exporter[242845]: ERROR 09:49:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:49:04 localhost openstack_network_exporter[242845]: ERROR 09:49:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:49:04 localhost openstack_network_exporter[242845]: ERROR 09:49:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:49:04 localhost openstack_network_exporter[242845]: ERROR 09:49:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:49:04 localhost openstack_network_exporter[242845]: Dec 2 04:49:04 localhost openstack_network_exporter[242845]: ERROR 09:49:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:49:04 localhost openstack_network_exporter[242845]: Dec 2 04:49:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:49:04 localhost podman[284955]: 2025-12-02 09:49:04.494636769 +0000 UTC m=+0.120303053 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:49:04 localhost podman[284955]: 2025-12-02 09:49:04.512212832 +0000 UTC m=+0.137879146 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:49:04 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:49:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48717 DF PROTO=TCP SPT=45224 DPT=9102 SEQ=2646252626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A6C7E40000000001030307) Dec 2 04:49:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30377 DF PROTO=TCP SPT=40134 DPT=9102 SEQ=1121546869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A6C9E40000000001030307) Dec 2 04:49:05 localhost nova_compute[281854]: 2025-12-02 09:49:05.619 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:06 localhost podman[240799]: time="2025-12-02T09:49:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:49:06 localhost podman[240799]: @ - - [02/Dec/2025:09:49:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147735 "" "Go-http-client/1.1" Dec 2 04:49:06 localhost podman[240799]: @ - - [02/Dec/2025:09:49:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17216 "" "Go-http-client/1.1" Dec 2 04:49:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:49:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:49:06 localhost nova_compute[281854]: 2025-12-02 09:49:06.685 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:06 localhost podman[284993]: 2025-12-02 09:49:06.742424522 +0000 UTC m=+0.092550962 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 2 04:49:06 localhost podman[284993]: 2025-12-02 09:49:06.797520345 +0000 UTC m=+0.147646785 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 04:49:06 localhost systemd[1]: tmp-crun.zU9Ivg.mount: Deactivated successfully. 
Dec 2 04:49:06 localhost podman[284992]: 2025-12-02 09:49:06.807677952 +0000 UTC m=+0.157662628 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:49:06 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:49:06 localhost podman[284992]: 2025-12-02 09:49:06.822917294 +0000 UTC m=+0.172901940 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:49:06 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 04:49:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48718 DF PROTO=TCP SPT=45224 DPT=9102 SEQ=2646252626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A6CFE40000000001030307) Dec 2 04:49:07 localhost sshd[285074]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:49:08 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22746 DF PROTO=TCP SPT=49628 DPT=9102 SEQ=420584671 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A6D3E40000000001030307) Dec 2 04:49:10 localhost nova_compute[281854]: 2025-12-02 09:49:10.621 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48719 DF PROTO=TCP SPT=45224 DPT=9102 SEQ=2646252626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A6DFA40000000001030307) Dec 2 04:49:11 localhost nova_compute[281854]: 2025-12-02 09:49:11.730 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:15 localhost ovn_controller[154505]: 2025-12-02T09:49:15Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:b2:03 192.168.0.102 Dec 2 04:49:15 localhost nova_compute[281854]: 2025-12-02 09:49:15.623 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:16 localhost nova_compute[281854]: 2025-12-02 09:49:16.775 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:49:17 localhost podman[285108]: 2025-12-02 09:49:17.461595069 +0000 UTC m=+0.099402701 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm) Dec 2 04:49:17 localhost podman[285108]: 2025-12-02 09:49:17.470638218 +0000 UTC m=+0.108445840 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:49:17 localhost systemd[1]: 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:49:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48720 DF PROTO=TCP SPT=45224 DPT=9102 SEQ=2646252626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A6FFE50000000001030307) Dec 2 04:49:20 localhost nova_compute[281854]: 2025-12-02 09:49:20.190 281858 DEBUG nova.compute.manager [None req-25b1fd2e-b51e-4a48-93f6-5c9e237f556a cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 04:49:20 localhost nova_compute[281854]: 2025-12-02 09:49:20.196 281858 INFO nova.compute.manager [None req-25b1fd2e-b51e-4a48-93f6-5c9e237f556a cb8b7d2a63b642aa999db12e17eeb9e4 e2d97696ab6749899bb8ba5ce29a3de2 - - default default] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Retrieving diagnostics#033[00m Dec 2 04:49:20 localhost nova_compute[281854]: 2025-12-02 09:49:20.640 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:21 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:21.203 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:21 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:21.205 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0#015 Dec 2 04:49:21 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:21 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:21 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:21 localhost 
ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:21 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:21 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:21 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:49:21 localhost podman[285127]: 2025-12-02 09:49:21.451093881 +0000 UTC m=+0.088853644 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 04:49:21 localhost podman[285127]: 2025-12-02 09:49:21.455735793 +0000 UTC m=+0.093495506 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 04:49:21 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:49:21 localhost nova_compute[281854]: 2025-12-02 09:49:21.810 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.374 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.375 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 146 time: 1.1700296#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35664 [02/Dec/2025:09:49:21.201] listener listener/metadata 0/0/0/1173/1173 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.392 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.393 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-keys HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 
Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35670 [02/Dec/2025:09:49:22.391] listener listener/metadata 0/0/0/21/21 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.413 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/public-keys HTTP/1.1" status: 404 len: 297 time: 0.0200615#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.423 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.423 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 
595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.437 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35676 [02/Dec/2025:09:49:22.422] listener listener/metadata 0/0/0/15/15 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.438 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 146 time: 0.0144057#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.443 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.443 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/ami-launch-index HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.456 160335 DEBUG 
neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.456 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" status: 200 len: 136 time: 0.0131860#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35690 [02/Dec/2025:09:49:22.442] listener listener/metadata 0/0/0/14/14 200 120 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.462 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.462 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-type HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.476 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost 
haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35702 [02/Dec/2025:09:49:22.461] listener listener/metadata 0/0/0/14/14 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.476 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/instance-type HTTP/1.1" status: 200 len: 143 time: 0.0137148#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.481 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.482 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-ipv4 HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.493 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.494 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" status: 200 len: 149 time: 0.0123684#033[00m Dec 2 04:49:22 localhost 
haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35714 [02/Dec/2025:09:49:22.480] listener listener/metadata 0/0/0/13/13 200 133 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.499 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.499 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-ipv4 HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.511 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35724 [02/Dec/2025:09:49:22.498] listener listener/metadata 0/0/0/13/13 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.512 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" status: 200 len: 150 time: 
0.0127134#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.516 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.517 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/hostname HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.528 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35726 [02/Dec/2025:09:49:22.516] listener listener/metadata 0/0/0/12/12 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.529 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/hostname HTTP/1.1" status: 200 len: 139 time: 0.0117881#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.538 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 
04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.539 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-hostname HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.552 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35738 [02/Dec/2025:09:49:22.538] listener listener/metadata 0/0/0/14/14 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.552 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" status: 200 len: 139 time: 0.0132070#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.560 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.561 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/user-data HTTP/1.0#015 Dec 2 04:49:22 localhost 
ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35754 [02/Dec/2025:09:49:22.560] listener listener/metadata 0/0/0/14/14 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.574 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/user-data HTTP/1.1" status: 404 len: 297 time: 0.0129514#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.590 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.591 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 
localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.606 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.607 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" status: 200 len: 155 time: 0.0158966#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35760 [02/Dec/2025:09:49:22.590] listener listener/metadata 0/0/0/16/16 200 139 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.614 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.615 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 
04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.629 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.630 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" status: 200 len: 138 time: 0.0144751#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35766 [02/Dec/2025:09:49:22.613] listener listener/metadata 0/0/0/17/17 200 122 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.636 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.637 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.651 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35774 [02/Dec/2025:09:49:22.636] listener listener/metadata 0/0/0/15/15 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.651 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" status: 200 len: 143 time: 0.0138040#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.657 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.658 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.670 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost 
haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35778 [02/Dec/2025:09:49:22.656] listener listener/metadata 0/0/0/13/13 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.670 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" status: 200 len: 143 time: 0.0127642#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.677 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.678 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-hostname HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.691 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35790 [02/Dec/2025:09:49:22.677] listener listener/metadata 0/0/0/15/15 200 123 - - ---- 1/1/0/0/0 0/0 "GET 
/2009-04-04/meta-data/public-hostname HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.692 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" status: 200 len: 139 time: 0.0140796#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.699 160335 DEBUG eventlet.wsgi.server [-] (160335) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.700 160335 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Accept: */*#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Connection: close#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Content-Type: text/plain#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: Host: 169.254.169.254#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: User-Agent: curl/7.84.0#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Forwarded-For: 192.168.0.102#015 Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: X-Ovn-Network-Id: 595e1c9b-709c-41d2-9212-0b18b13291a8 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.712 160335 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 2 04:49:22 localhost haproxy-metadata-proxy-595e1c9b-709c-41d2-9212-0b18b13291a8[284946]: 192.168.0.102:35798 [02/Dec/2025:09:49:22.698] listener listener/metadata 0/0/0/14/14 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" Dec 2 04:49:22 localhost ovn_metadata_agent[160216]: 2025-12-02 09:49:22.713 160335 INFO eventlet.wsgi.server [-] 192.168.0.102, "GET 
/2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200 len: 139 time: 0.0132546#033[00m Dec 2 04:49:25 localhost nova_compute[281854]: 2025-12-02 09:49:25.673 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:49:26 localhost podman[285146]: 2025-12-02 09:49:26.455128864 +0000 UTC m=+0.089613224 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git) Dec 2 04:49:26 localhost podman[285146]: 2025-12-02 09:49:26.472233085 +0000 UTC m=+0.106717425 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:49:26 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:49:26 localhost nova_compute[281854]: 2025-12-02 09:49:26.847 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:49:28 localhost systemd[1]: tmp-crun.D9gWTO.mount: Deactivated successfully. 
Dec 2 04:49:28 localhost podman[285166]: 2025-12-02 09:49:28.444523343 +0000 UTC m=+0.084357825 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:49:28 localhost podman[285166]: 2025-12-02 09:49:28.45428545 +0000 UTC m=+0.094119932 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:49:28 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:49:30 localhost nova_compute[281854]: 2025-12-02 09:49:30.718 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:30 localhost ovn_controller[154505]: 2025-12-02T09:49:30Z|00080|memory_trim|INFO|Detected inactivity (last active 30026 ms ago): trimming memory Dec 2 04:49:31 localhost nova_compute[281854]: 2025-12-02 09:49:31.850 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26550 DF PROTO=TCP SPT=39952 DPT=9102 SEQ=1568618815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A738EE0000000001030307) Dec 2 04:49:34 localhost openstack_network_exporter[242845]: ERROR 09:49:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:49:34 localhost openstack_network_exporter[242845]: ERROR 09:49:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:49:34 localhost openstack_network_exporter[242845]: ERROR 09:49:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:49:34 localhost openstack_network_exporter[242845]: ERROR 09:49:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:49:34 localhost openstack_network_exporter[242845]: Dec 2 04:49:34 localhost openstack_network_exporter[242845]: ERROR 09:49:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:49:34 localhost openstack_network_exporter[242845]: Dec 2 04:49:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26551 DF PROTO=TCP SPT=39952 DPT=9102 SEQ=1568618815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A73CE50000000001030307) Dec 2 04:49:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:49:35 localhost podman[285189]: 2025-12-02 09:49:35.454590955 +0000 UTC m=+0.094085282 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 04:49:35 localhost podman[285189]: 2025-12-02 09:49:35.471243964 +0000 UTC m=+0.110738221 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team) Dec 2 04:49:35 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:49:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48721 DF PROTO=TCP SPT=45224 DPT=9102 SEQ=2646252626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A73FE40000000001030307) Dec 2 04:49:35 localhost nova_compute[281854]: 2025-12-02 09:49:35.768 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:36 localhost podman[240799]: time="2025-12-02T09:49:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:49:36 localhost podman[240799]: @ - - [02/Dec/2025:09:49:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147735 "" "Go-http-client/1.1" Dec 2 04:49:36 localhost podman[240799]: @ - - [02/Dec/2025:09:49:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17229 "" "Go-http-client/1.1" Dec 2 04:49:36 localhost nova_compute[281854]: 2025-12-02 09:49:36.854 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:49:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26552 DF PROTO=TCP SPT=39952 DPT=9102 SEQ=1568618815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A744E40000000001030307) Dec 2 04:49:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 04:49:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:49:37 localhost systemd[1]: tmp-crun.s4UtcN.mount: Deactivated successfully.
Dec 2 04:49:37 localhost podman[285209]: 2025-12-02 09:49:37.457790707 +0000 UTC m=+0.092681275 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:49:37 localhost podman[285209]: 2025-12-02 09:49:37.463624591 +0000 UTC m=+0.098515159 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:49:37 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 04:49:37 localhost podman[285210]: 2025-12-02 09:49:37.544564095 +0000 UTC m=+0.173867725 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 2 04:49:37 localhost podman[285210]: 2025-12-02 09:49:37.583089322 +0000 UTC m=+0.212393022 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:49:37 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:49:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30378 DF PROTO=TCP SPT=40134 DPT=9102 SEQ=1121546869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A747E40000000001030307)
Dec 2 04:49:40 localhost nova_compute[281854]: 2025-12-02 09:49:40.771 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:49:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26553 DF PROTO=TCP SPT=39952 DPT=9102 SEQ=1568618815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A754A40000000001030307)
Dec 2 04:49:41 localhost nova_compute[281854]: 2025-12-02 09:49:41.893 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.523 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.524 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.552 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.553 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.554 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.773 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.787 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.788 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.788 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 2 04:49:45 localhost nova_compute[281854]: 2025-12-02 09:49:45.788 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.136 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.159 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.160 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.161 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.161 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.161 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.162 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.162 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.163 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.163 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.164 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.182 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.183 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.183 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.184 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.184 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.622 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.695 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.695 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.947 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.986 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.988 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12288MB free_disk=41.8370246887207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.989 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:49:46 localhost nova_compute[281854]: 2025-12-02 09:49:46.990 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.139 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.139 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.140 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.177 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.665 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.670 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.685 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.706 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 2 04:49:47 localhost nova_compute[281854]: 2025-12-02 09:49:47.706 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:49:48 localhost podman[285300]: 2025-12-02 09:49:48.44258672 +0000 UTC m=+0.085824274 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 2 04:49:48 localhost podman[285300]: 2025-12-02 09:49:48.456582139 +0000 UTC m=+0.099819703 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 2 04:49:48 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 04:49:48 localhost snmpd[69635]: empty variable list in _query
Dec 2 04:49:48 localhost snmpd[69635]: empty variable list in _query
Dec 2 04:49:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26554 DF PROTO=TCP SPT=39952 DPT=9102 SEQ=1568618815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A775E40000000001030307)
Dec 2 04:49:50 localhost nova_compute[281854]: 2025-12-02 09:49:50.778 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:49:51 localhost nova_compute[281854]: 2025-12-02 09:49:51.993 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:49:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:49:52 localhost systemd[1]: tmp-crun.74YMdT.mount: Deactivated successfully.
Dec 2 04:49:52 localhost podman[285319]: 2025-12-02 09:49:52.420134715 +0000 UTC m=+0.066431912 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:49:52 localhost podman[285319]: 2025-12-02 09:49:52.429061232 +0000 UTC m=+0.075358449 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 04:49:52 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:49:55 localhost nova_compute[281854]: 2025-12-02 09:49:55.810 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:49:57 localhost nova_compute[281854]: 2025-12-02 09:49:57.031 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:49:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 04:49:57 localhost podman[285338]: 2025-12-02 09:49:57.561704107 +0000 UTC m=+0.195776284 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, version=9.6)
Dec 2 04:49:57 localhost podman[285338]: 2025-12-02 09:49:57.579091885 +0000 UTC m=+0.213164032 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec 2 04:49:57 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated
successfully. Dec 2 04:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:49:59 localhost systemd[1]: tmp-crun.E7kisL.mount: Deactivated successfully. Dec 2 04:49:59 localhost podman[285358]: 2025-12-02 09:49:59.452079945 +0000 UTC m=+0.088214818 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:49:59 localhost podman[285358]: 2025-12-02 09:49:59.467391288 +0000 UTC m=+0.103526151 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:49:59 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:50:00 localhost nova_compute[281854]: 2025-12-02 09:50:00.851 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:50:02 localhost nova_compute[281854]: 2025-12-02 09:50:02.074 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:50:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:50:03.034 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:50:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:50:03.034 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:50:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:50:03.035 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:50:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21187 DF PROTO=TCP SPT=46744 DPT=9102 SEQ=2091678735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A7AE1E0000000001030307)
Dec 2 04:50:04 localhost openstack_network_exporter[242845]: ERROR 09:50:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:50:04 localhost openstack_network_exporter[242845]: ERROR 09:50:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:50:04 localhost openstack_network_exporter[242845]: ERROR 09:50:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:50:04 localhost openstack_network_exporter[242845]: ERROR 09:50:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:50:04 localhost openstack_network_exporter[242845]:
Dec 2 04:50:04 localhost openstack_network_exporter[242845]: ERROR 09:50:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:50:04 localhost openstack_network_exporter[242845]:
Dec 2 04:50:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21188 DF PROTO=TCP SPT=46744 DPT=9102 SEQ=2091678735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A7B2240000000001030307)
Dec 2 04:50:05 localhost nova_compute[281854]: 2025-12-02 09:50:05.873 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:50:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26555 DF PROTO=TCP SPT=39952 DPT=9102 SEQ=1568618815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A7B5E40000000001030307)
Dec 2 04:50:06 localhost podman[240799]: time="2025-12-02T09:50:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:50:06 localhost podman[240799]: @ - - [02/Dec/2025:09:50:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147735 "" "Go-http-client/1.1"
Dec 2 04:50:06
localhost podman[240799]: @ - - [02/Dec/2025:09:50:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17221 "" "Go-http-client/1.1" Dec 2 04:50:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:50:06 localhost systemd[1]: tmp-crun.vuUnHn.mount: Deactivated successfully. Dec 2 04:50:06 localhost podman[285382]: 2025-12-02 09:50:06.439856019 +0000 UTC m=+0.081891791 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:50:06 localhost podman[285382]: 2025-12-02 09:50:06.450045288 +0000 UTC m=+0.092081020 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 2 04:50:06 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 04:50:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21189 DF PROTO=TCP SPT=46744 DPT=9102 SEQ=2091678735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A7BA250000000001030307)
Dec 2 04:50:07 localhost nova_compute[281854]: 2025-12-02 09:50:07.077 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:50:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48722 DF PROTO=TCP SPT=45224 DPT=9102 SEQ=2646252626 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A7BDE40000000001030307)
Dec 2 04:50:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 04:50:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:50:08 localhost podman[285420]: 2025-12-02 09:50:08.319856653 +0000 UTC m=+0.100135962 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller) Dec 2 04:50:08 localhost podman[285420]: 2025-12-02 09:50:08.364046368 +0000 UTC m=+0.144325707 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3) Dec 2 04:50:08 localhost podman[285419]: 2025-12-02 09:50:08.373832896 +0000 UTC m=+0.155926872 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:50:08 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:50:08 localhost podman[285419]: 2025-12-02 09:50:08.385100283 +0000 UTC m=+0.167194239 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:50:08 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 04:50:10 localhost nova_compute[281854]: 2025-12-02 09:50:10.918 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:50:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21190 DF PROTO=TCP SPT=46744 DPT=9102 SEQ=2091678735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A7C9E40000000001030307)
Dec 2 04:50:12 localhost nova_compute[281854]: 2025-12-02 09:50:12.118 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:50:15 localhost nova_compute[281854]: 2025-12-02 09:50:15.922 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.104 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.105 12 INFO ceilometer.polling.manager [-] Polling pollster
disk.device.usage in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.116 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.117 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b0e54fa1-2585-4df7-8756-503f49e86e09', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.105712', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e05dfb0-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.324808052, 'message_signature': '554add937cec48679c5210ee00a88a02d99aa5e5c0c4bb0a2307300d99b1b1c1'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.105712', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e05ef28-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.324808052, 'message_signature': '0052c2ead7341547bb420e44b0a71f3b455237f1ade2ed02d86d2f46c8db2db2'}]}, 'timestamp': '2025-12-02 09:50:16.117832', '_unique_id': '545bc74cfc0f442d847722eb71b4ed24'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.118 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.119 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.122 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c9f7aee5-57a2-45a2-b96d-20fee887e2f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.120003', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e06c858-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': '3506b02b9e0b87b6a8fbf1beac3551bf5fa1196ba7b08bf5f571eec6ae37c546'}]}, 'timestamp': '2025-12-02 09:50:16.123368', '_unique_id': 'bb2d99ee08b041a1b1fe3eee0fd2ffb9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.124 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.125 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '38583897-022c-4437-8d4e-f9206870bae7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.125057', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e07175e-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': 'dfd61991f10811d564a0bf0827c3a60b28418b8952266e521f95c22d8b89a981'}]}, 'timestamp': '2025-12-02 09:50:16.125413', '_unique_id': '93672f6d38ba49c0ac3436c60285422c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.126 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b0c3a600-5649-46cb-abd7-17132668d8c3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.126857', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e075d5e-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': '82d5010a87b1ce48ecb672736ebcc8fda122b3358cdb4874f161a609a5a06c6b'}]}, 'timestamp': '2025-12-02 09:50:16.127163', '_unique_id': '7d04b3b5a8214b3295c08692127a24a4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self._connection
= self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.128 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.150 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b553321b-994c-4d0b-b506-94084b79e17a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:50:16.128522', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '4e0af450-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.369247024, 'message_signature': '6a482c5dbe451f6fbb43c554fe1eed163223c8af3db1c6d8291cde71812c7e4e'}]}, 'timestamp': '2025-12-02 09:50:16.150735', '_unique_id': '14f4765dc76846679ead70e5a5ed2b79'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.151 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.152 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:50:16.152 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'df94767f-1f7f-4bf1-96ce-4a5d7a84f5f7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.152341', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e0b40f4-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': 
'2961f6f64439e80f50d6d8dc670d74e62862b5b546bdf06cc23d60c296b758b8'}]}, 'timestamp': '2025-12-02 09:50:16.152675', '_unique_id': '085d503e765948efb50fe07479388630'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR 
oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.153 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.154 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.154 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.154 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '727ef592-a4d6-4b3a-8fc7-4a5981de15a6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.154170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e0b8820-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.324808052, 'message_signature': '5bef71a7c4a4af894ac6b499154c9971d991b55a12cec74b302d9d493add8f75'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.154170', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e0b92ac-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.324808052, 'message_signature': '0b3a7c96045d10a4e83302138583f3ebe3c0bd0a24558e7459d777ce7f260eb5'}]}, 'timestamp': '2025-12-02 09:50:16.154745', '_unique_id': 'b8c49f74db3748709e0fba383cd5367c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.155 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.156 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.156 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '24586927-4b87-4496-be96-cae5b6984a2a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.156106', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e0bd438-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': 'd72e6f296c29b5b9abb19ad8cd9ba208e18d2d2456cd7a8648abdc53ac0435da'}]}, 'timestamp': '2025-12-02 09:50:16.156415', '_unique_id': '2bb30df039e14f48beb72f9196d93e88'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.157 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.180 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.180 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'efc3b22b-dae9-46d6-bcc1-251500a91672', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.157791', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e0f8a38-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': 'a160e54033ec7a5f3a530a1a85705f84568a4cb6c15d1b2c058c0f630d1b800c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.157791', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e0f99c4-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': '29fb25f813bf8ae748c078033b9d84f158b3b5d1e182419ace792391831f73ab'}]}, 'timestamp': '2025-12-02 09:50:16.181135', '_unique_id': 'e08fbd9849af49e1bd953431ba597ec4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.182 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.183 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.183 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f0a654c8-bf9c-4047-ae15-76dab10310ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.183310', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e0ffd74-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': '6c554ecd1d8d7b3d9774618e8d764c9c044d2503706c3bc55057318b00cf5a20'}]}, 'timestamp': '2025-12-02 09:50:16.183743', '_unique_id': '33dc5e51a53142b299dac81e5519b9e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.184 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.185 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.185 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.185 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '90fd18ed-6e35-42b2-a5c4-ec4b09d1f894', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.185349', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e104a2c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': 'b61f190199bec55615933002155256bc452b24645d08e590ea0d5de98889ebed'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.185349', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e10562a-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': '9eb7cfd0c498e6f7c4e49a54d293ea828fe5b665b99f040adf638121b4756d72'}]}, 'timestamp': '2025-12-02 09:50:16.186352', '_unique_id': '05ce8171ee084e4b9eed28035b5c4e76'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.187 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '83751ef5-5823-4120-b6d8-370a90048290', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.187988', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e10b14c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.324808052, 'message_signature': 'e6804f548cbfb50295fe3852ade25577ee10428a4482fcd59b5aead4e1b5b5ef'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.187988', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e10bbd8-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.324808052, 'message_signature': '766f5a8d1dd3b56bea829ec34cc5bf62cc80fee3d591a2db1bce8e2d56af8c10'}]}, 'timestamp': '2025-12-02 09:50:16.188543', '_unique_id': 'f67c4e9afee243e48aecf31358d393c3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.189 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.190 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.190 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.190 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'b2d6c2a0-78f2-48e5-bd98-9da8240c9528', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.190145', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e11061a-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': 'f82704d6e0efe447354a1c833b9b88ccdb4b4c9dd8a4883bdc20d8b2f8df7e1f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.190145', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e11136c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': 'f58c32fe3fd063776e44b74af80a5ec80e08e8d0a0134231770c276fe56b329a'}]}, 'timestamp': '2025-12-02 09:50:16.190811', '_unique_id': '3332c925e9fe45de89bcbfc92e1c438f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:50:16.191 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.191 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.192 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.192 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7dc53f05-e361-45ce-8ad5-cc54bf1b24c9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.192571', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e11668c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': '43d133f7d18c851f0f7a6aef8e1f487cb55cb4b06627655022ee63b4db76beee'}]}, 'timestamp': '2025-12-02 09:50:16.192970', '_unique_id': '8e47daaa6f5543c28e044f155a99ea14'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.193 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.194 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a49c6580-4c4c-40bd-8377-1a559a5706c8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.194441', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e11ad40-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': '9af88ed2dedf63546ec98d3ad2272385ffb3abce99c7e0f1743100e0e5d84c08'}]}, 'timestamp': '2025-12-02 09:50:16.194766', '_unique_id': '92d4e8a5b8444ddfa4347fce1000085a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:50:16.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.196 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.196 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.196 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '599a3987-ce19-4311-9219-f29dbcb95a81', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.196319', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e11f818-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': '6e7837fc81d6b730675815f50bc05b87faa8f7eedf88ff2ce60ce3a3e3d3e847'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.196319', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e120696-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': '3a64092aa32df51b3cee55852bdf17f006732988c26aa4af30b07c78b9e2e805'}]}, 'timestamp': '2025-12-02 09:50:16.197021', '_unique_id': '5fd943736ff94de8b1ec26fc388a15fb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.197 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.198 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.198 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 12100000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c3c427cb-66b0-4efd-aa9c-68e071b252dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 12100000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:50:16.198602', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '4e125380-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.369247024, 'message_signature': '93c966c4125a60a6d923dc094bd9db27926118452c9668b34b94dbb27463075e'}]}, 'timestamp': '2025-12-02 09:50:16.199069', '_unique_id': '582e60739c6f4a47aca2873a38d9dc0f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.199 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.200 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.201 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c0be7df4-d4bf-4ca3-a2cb-1afdb5cb64e8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.200823', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e12a678-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': '0fa1d3652a942d89ade56b1cc8ccf486bae8e674526e356a60fe8b4fb8888672'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.200823', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e12b2da-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': '2b2d6b132ce99ca0356570c4b8f9e76c25225fbcf14ccd58a06f4e73566cb194'}]}, 'timestamp': '2025-12-02 09:50:16.201472', '_unique_id': '59c208a47e874865b08c29cd26203a1d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.202 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.203 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '110ad6c6-fd9c-4677-8dbb-67afe2881a23', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:50:16.203293', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4e13071c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': 'b0486c27e70dcc768611637ba8286d8521743131a16e58afa30dc6193ff2f6bb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:50:16.203293', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4e131392-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.376846415, 'message_signature': '36207a093461f7cbc2c991684a6f2a11c2beccc2880a9059af52f8d73e5cdc0d'}]}, 'timestamp': '2025-12-02 09:50:16.203900', '_unique_id': '683009b601ea4b7d8597485a16b691f0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging
Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.204 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.205 12 INFO ceilometer.polling.manager [-] Polling 
pollster network.incoming.packets.error in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.205 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5458f3d4-225f-477e-a45a-65375a9d1af6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.205403', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 
'message_id': '4e13599c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': 'e7b52cbbf6a2066d8e7161551c126641c606caa42b3af5d7191a5b4c8ec2eafb'}]}, 'timestamp': '2025-12-02 09:50:16.205745', '_unique_id': '95e588b7447a4c2facccc8d22567e94c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection 
refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.206 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.207 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.207 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '786dc63b-ef01-4e96-a1bb-afd3339dc25f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:50:16.207221', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4e13a122-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11178.339074939, 'message_signature': 'a14f35e4c35dea2ec85b79fa205f4e9c04e3470086a7b8b54179d8d41f5cbd65'}]}, 'timestamp': '2025-12-02 09:50:16.207586', '_unique_id': '67459c88f3fa4e6a994a71ca34a283af'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:50:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:50:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:50:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:50:16.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:50:17 localhost nova_compute[281854]: 2025-12-02 09:50:17.164 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21191 DF PROTO=TCP SPT=46744 DPT=9102 SEQ=2091678735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A7E9E50000000001030307) Dec 2 04:50:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:50:19 localhost podman[285535]: 2025-12-02 09:50:19.55469616 +0000 UTC m=+0.190096615 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:50:19 localhost podman[285535]: 2025-12-02 09:50:19.565201366 +0000 UTC m=+0.200601871 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:50:19 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:50:20 localhost nova_compute[281854]: 2025-12-02 09:50:20.926 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:22 localhost nova_compute[281854]: 2025-12-02 09:50:22.195 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:50:23 localhost podman[285554]: 2025-12-02 09:50:23.452731787 +0000 UTC m=+0.086332317 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:50:23 localhost podman[285554]: 2025-12-02 09:50:23.46116107 +0000 UTC 
m=+0.094761520 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent) Dec 2 04:50:23 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:50:25 localhost nova_compute[281854]: 2025-12-02 09:50:25.979 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:27 localhost nova_compute[281854]: 2025-12-02 09:50:27.225 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:50:28 localhost podman[285574]: 2025-12-02 09:50:28.439543836 +0000 UTC m=+0.078199883 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 04:50:28 localhost podman[285574]: 2025-12-02 09:50:28.454600004 +0000 UTC m=+0.093256001 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal) Dec 2 04:50:28 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:50:30 localhost podman[285594]: 2025-12-02 09:50:30.437567583 +0000 UTC m=+0.081445928 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:50:30 localhost podman[285594]: 2025-12-02 09:50:30.514146533 +0000 UTC m=+0.158024828 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:50:30 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:50:31 localhost nova_compute[281854]: 2025-12-02 09:50:31.016 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:32 localhost nova_compute[281854]: 2025-12-02 09:50:32.256 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:33 localhost sshd[285617]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:50:33 localhost systemd-logind[757]: New session 62 of user zuul. Dec 2 04:50:33 localhost systemd[1]: Started Session 62 of User zuul. Dec 2 04:50:33 localhost python3[285639]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:50:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49892 DF PROTO=TCP SPT=50452 DPT=9102 SEQ=637600805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A8234E0000000001030307) Dec 2 04:50:34 localhost openstack_network_exporter[242845]: ERROR 09:50:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:50:34 localhost openstack_network_exporter[242845]: ERROR 09:50:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:50:34 localhost openstack_network_exporter[242845]: ERROR 09:50:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:50:34 localhost openstack_network_exporter[242845]: ERROR 09:50:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath 
Dec 2 04:50:34 localhost openstack_network_exporter[242845]: Dec 2 04:50:34 localhost openstack_network_exporter[242845]: ERROR 09:50:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:50:34 localhost openstack_network_exporter[242845]: Dec 2 04:50:34 localhost subscription-manager[285640]: Unregistered machine with identity: d1b4d74d-2a0e-41d6-a299-a10b4d7396a9 Dec 2 04:50:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49893 DF PROTO=TCP SPT=50452 DPT=9102 SEQ=637600805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A827640000000001030307) Dec 2 04:50:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21192 DF PROTO=TCP SPT=46744 DPT=9102 SEQ=2091678735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A829E50000000001030307) Dec 2 04:50:36 localhost nova_compute[281854]: 2025-12-02 09:50:36.019 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:36 localhost podman[240799]: time="2025-12-02T09:50:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:50:36 localhost podman[240799]: @ - - [02/Dec/2025:09:50:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147735 "" "Go-http-client/1.1" Dec 2 04:50:36 localhost podman[240799]: @ - - [02/Dec/2025:09:50:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17230 "" "Go-http-client/1.1" Dec 2 04:50:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49894 DF PROTO=TCP SPT=50452 DPT=9102 SEQ=637600805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A82F640000000001030307) Dec 2 04:50:37 localhost nova_compute[281854]: 2025-12-02 09:50:37.296 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:50:37 localhost podman[285642]: 2025-12-02 09:50:37.458220424 +0000 UTC m=+0.095994023 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3) Dec 2 04:50:37 localhost podman[285642]: 2025-12-02 09:50:37.50129442 +0000 UTC m=+0.139068019 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:50:37 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:50:38 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26556 DF PROTO=TCP SPT=39952 DPT=9102 SEQ=1568618815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A833E50000000001030307) Dec 2 04:50:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:50:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:50:39 localhost podman[285662]: 2025-12-02 09:50:39.189201219 +0000 UTC m=+0.099811943 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:50:39 localhost podman[285661]: 2025-12-02 09:50:39.160994815 +0000 UTC m=+0.081268854 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:50:39 localhost podman[285662]: 2025-12-02 09:50:39.235430678 +0000 UTC m=+0.146041372 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0) Dec 2 04:50:39 localhost podman[285661]: 2025-12-02 09:50:39.241850597 +0000 UTC m=+0.162124646 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:50:39 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:50:39 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:50:41 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49895 DF PROTO=TCP SPT=50452 DPT=9102 SEQ=637600805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A83F240000000001030307) Dec 2 04:50:41 localhost nova_compute[281854]: 2025-12-02 09:50:41.066 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:42 localhost nova_compute[281854]: 2025-12-02 09:50:42.339 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:46 localhost nova_compute[281854]: 2025-12-02 09:50:46.113 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.344 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.708 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.709 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.709 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] 
Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.709 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.770 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.770 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.771 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:50:47 localhost nova_compute[281854]: 2025-12-02 09:50:47.771 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.528 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating 
instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.546 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.546 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.547 281858 DEBUG 
oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.547 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.548 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.548 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.549 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.549 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.550 281858 DEBUG nova.compute.manager [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.550 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.572 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.572 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.573 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.573 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:50:48 localhost nova_compute[281854]: 2025-12-02 09:50:48.574 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.011 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.072 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.073 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.274 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.278 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12293MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.278 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.279 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.346 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.346 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.347 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.387 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:50:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49896 DF PROTO=TCP SPT=50452 DPT=9102 SEQ=637600805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A85FE40000000001030307) Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.791 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.797 281858 DEBUG 
nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.820 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.822 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:50:49 localhost nova_compute[281854]: 2025-12-02 09:50:49.822 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:50:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 04:50:50 localhost podman[285751]: 2025-12-02 09:50:50.448116636 +0000 UTC m=+0.084130364 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:50:50 localhost podman[285751]: 2025-12-02 09:50:50.461652867 +0000 UTC m=+0.097666575 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:50:50 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:50:51 localhost nova_compute[281854]: 2025-12-02 09:50:51.169 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:52 localhost nova_compute[281854]: 2025-12-02 09:50:52.371 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:50:54 localhost podman[285771]: 2025-12-02 09:50:54.442493802 +0000 UTC m=+0.077841053 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 04:50:54 localhost podman[285771]: 2025-12-02 09:50:54.449321705 +0000 UTC m=+0.084668946 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 04:50:54 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:50:56 localhost nova_compute[281854]: 2025-12-02 09:50:56.217 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:57 localhost nova_compute[281854]: 2025-12-02 09:50:57.403 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:50:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. 
Dec 2 04:50:59 localhost podman[285789]: 2025-12-02 09:50:59.463210642 +0000 UTC m=+0.101882818 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.) 
Dec 2 04:50:59 localhost podman[285789]: 2025-12-02 09:50:59.483454044 +0000 UTC m=+0.122126210 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 04:50:59 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:51:00 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Dec 2 04:51:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:51:00 localhost systemd[1]: tmp-crun.m1WLTI.mount: Deactivated successfully. 
Dec 2 04:51:00 localhost podman[285811]: 2025-12-02 09:51:00.929727599 +0000 UTC m=+0.102319199 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:51:00 localhost podman[285811]: 2025-12-02 09:51:00.940031105 +0000 UTC m=+0.112622515 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:51:00 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:51:01 localhost nova_compute[281854]: 2025-12-02 09:51:01.247 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:02 localhost nova_compute[281854]: 2025-12-02 09:51:02.439 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:51:03.035 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:51:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:51:03.036 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:51:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:51:03.037 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:51:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3477 DF PROTO=TCP SPT=54006 DPT=9102 SEQ=2645994212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A8987E0000000001030307) Dec 2 04:51:04 localhost openstack_network_exporter[242845]: ERROR 09:51:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:51:04 localhost openstack_network_exporter[242845]: ERROR 09:51:04 
appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:51:04 localhost openstack_network_exporter[242845]: ERROR 09:51:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:51:04 localhost openstack_network_exporter[242845]: ERROR 09:51:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:51:04 localhost openstack_network_exporter[242845]: Dec 2 04:51:04 localhost openstack_network_exporter[242845]: ERROR 09:51:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:51:04 localhost openstack_network_exporter[242845]: Dec 2 04:51:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3478 DF PROTO=TCP SPT=54006 DPT=9102 SEQ=2645994212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A89CA40000000001030307) Dec 2 04:51:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49897 DF PROTO=TCP SPT=50452 DPT=9102 SEQ=637600805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A89FE40000000001030307) Dec 2 04:51:06 localhost podman[240799]: time="2025-12-02T09:51:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:51:06 localhost podman[240799]: @ - - [02/Dec/2025:09:51:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147735 "" "Go-http-client/1.1" Dec 2 04:51:06 localhost podman[240799]: @ - - [02/Dec/2025:09:51:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17240 "" "Go-http-client/1.1" Dec 2 04:51:06 localhost 
nova_compute[281854]: 2025-12-02 09:51:06.277 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3479 DF PROTO=TCP SPT=54006 DPT=9102 SEQ=2645994212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A8A4A40000000001030307) Dec 2 04:51:07 localhost nova_compute[281854]: 2025-12-02 09:51:07.484 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21193 DF PROTO=TCP SPT=46744 DPT=9102 SEQ=2091678735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A8A7E40000000001030307) Dec 2 04:51:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:51:08 localhost podman[285947]: 2025-12-02 09:51:08.443131338 +0000 UTC m=+0.082934940 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 04:51:08 localhost podman[285947]: 2025-12-02 09:51:08.450530457 +0000 UTC m=+0.090334049 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Dec 2 04:51:08 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 04:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:51:09 localhost podman[285965]: 2025-12-02 09:51:09.442183966 +0000 UTC m=+0.080373823 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:51:09 localhost systemd[1]: tmp-crun.S6i61e.mount: Deactivated successfully. 
Dec 2 04:51:09 localhost podman[285966]: 2025-12-02 09:51:09.505580812 +0000 UTC m=+0.138909738 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true) Dec 2 04:51:09 localhost podman[285965]: 2025-12-02 09:51:09.53317349 +0000 UTC m=+0.171363337 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 
'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:51:09 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:51:09 localhost podman[285966]: 2025-12-02 09:51:09.572956735 +0000 UTC m=+0.206285661 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:51:09 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:51:09 localhost sshd[286013]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:51:09 localhost systemd-logind[757]: New session 63 of user tripleo-admin. Dec 2 04:51:09 localhost systemd[1]: Created slice User Slice of UID 1003. Dec 2 04:51:09 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Dec 2 04:51:09 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Dec 2 04:51:09 localhost systemd[1]: Starting User Manager for UID 1003... Dec 2 04:51:10 localhost systemd[286017]: Queued start job for default target Main User Target. Dec 2 04:51:10 localhost systemd[286017]: Created slice User Application Slice. Dec 2 04:51:10 localhost systemd[286017]: Started Mark boot as successful after the user session has run 2 minutes. Dec 2 04:51:10 localhost systemd-journald[47611]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Dec 2 04:51:10 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 04:51:10 localhost systemd[286017]: Started Daily Cleanup of User's Temporary Directories. Dec 2 04:51:10 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:51:10 localhost systemd[286017]: Reached target Paths. Dec 2 04:51:10 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:51:10 localhost systemd[286017]: Reached target Timers. 
Dec 2 04:51:10 localhost systemd[286017]: Starting D-Bus User Message Bus Socket... Dec 2 04:51:10 localhost systemd[286017]: Starting Create User's Volatile Files and Directories... Dec 2 04:51:10 localhost systemd[286017]: Listening on D-Bus User Message Bus Socket. Dec 2 04:51:10 localhost systemd[286017]: Reached target Sockets. Dec 2 04:51:10 localhost systemd[286017]: Finished Create User's Volatile Files and Directories. Dec 2 04:51:10 localhost systemd[286017]: Reached target Basic System. Dec 2 04:51:10 localhost systemd[286017]: Reached target Main User Target. Dec 2 04:51:10 localhost systemd[286017]: Startup finished in 156ms. Dec 2 04:51:10 localhost systemd[1]: Started User Manager for UID 1003. Dec 2 04:51:10 localhost systemd[1]: Started Session 63 of User tripleo-admin. Dec 2 04:51:10 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 04:51:10 localhost python3[286160]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 
ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:51:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:48:4f:22 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3480 DF PROTO=TCP SPT=54006 DPT=9102 SEQ=2645994212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A47A8B4640000000001030307) Dec 2 04:51:11 localhost nova_compute[281854]: 2025-12-02 09:51:11.315 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:11 localhost python3[286304]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 2 04:51:11 localhost systemd[1]: Stopping Netfilter Tables... Dec 2 04:51:11 localhost systemd[1]: nftables.service: Deactivated successfully. Dec 2 04:51:11 localhost systemd[1]: Stopped Netfilter Tables. Dec 2 04:51:11 localhost systemd[1]: Starting Netfilter Tables... Dec 2 04:51:12 localhost systemd[1]: Finished Netfilter Tables. 
Dec 2 04:51:12 localhost nova_compute[281854]: 2025-12-02 09:51:12.541 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:16 localhost nova_compute[281854]: 2025-12-02 09:51:16.347 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:17 localhost nova_compute[281854]: 2025-12-02 09:51:17.582 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:51:21 localhost nova_compute[281854]: 2025-12-02 09:51:21.405 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:21 localhost podman[286451]: 2025-12-02 09:51:21.480985491 +0000 UTC m=+0.064712803 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 04:51:21 localhost podman[286451]: 2025-12-02 09:51:21.489781906 +0000 UTC m=+0.073509208 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible) Dec 2 04:51:21 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:51:22 localhost nova_compute[281854]: 2025-12-02 09:51:22.616 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:51:25 localhost podman[286524]: 2025-12-02 09:51:25.438815221 +0000 UTC m=+0.074948717 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:51:25 localhost podman[286524]: 2025-12-02 09:51:25.472057301 +0000 UTC 
m=+0.108190807 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Dec 2 04:51:25 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:51:26 localhost nova_compute[281854]: 2025-12-02 09:51:26.409 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:27 localhost nova_compute[281854]: 2025-12-02 09:51:27.637 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:28 localhost podman[286622]: Dec 2 04:51:28 localhost podman[286622]: 2025-12-02 09:51:28.149678722 +0000 UTC m=+0.084877064 container create a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_euclid, io.buildah.version=1.41.4, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, name=rhceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:51:28 localhost systemd[1]: Started libpod-conmon-a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca.scope. Dec 2 04:51:28 localhost systemd[1]: Started libcrun container. 
Dec 2 04:51:28 localhost podman[286622]: 2025-12-02 09:51:28.115472505 +0000 UTC m=+0.050670837 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:51:28 localhost podman[286622]: 2025-12-02 09:51:28.22884525 +0000 UTC m=+0.164043572 container init a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_euclid, io.openshift.tags=rhceph ceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph) Dec 2 04:51:28 localhost podman[286622]: 2025-12-02 09:51:28.243852651 +0000 UTC m=+0.179050983 container start a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_euclid, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, maintainer=Guillaume 
Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:51:28 localhost podman[286622]: 2025-12-02 09:51:28.244123218 +0000 UTC m=+0.179321550 container attach a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_euclid, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
release=1763362218, io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph) Dec 2 04:51:28 localhost zen_euclid[286637]: 167 167 Dec 2 04:51:28 localhost systemd[1]: libpod-a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca.scope: Deactivated successfully. Dec 2 04:51:28 localhost podman[286622]: 2025-12-02 09:51:28.248102865 +0000 UTC m=+0.183301237 container died a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_euclid, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, name=rhceph, GIT_BRANCH=main) Dec 2 04:51:28 localhost podman[286644]: 2025-12-02 09:51:28.354812551 +0000 UTC m=+0.092428324 container remove a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_euclid, version=7, com.redhat.component=rhceph-container, 
build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, name=rhceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64) Dec 2 04:51:28 localhost systemd[1]: libpod-conmon-a5a41c85c45658c818e59584bdf4810c91db2325e22f77aff7095b6a364d80ca.scope: Deactivated successfully. Dec 2 04:51:28 localhost systemd[1]: Reloading. Dec 2 04:51:28 localhost systemd-sysv-generator[286689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:51:28 localhost systemd-rc-local-generator[286686]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: var-lib-containers-storage-overlay-4995067a505adc822eda7872cf00cb503647bd3a98f04b7ef8ac1961b1ea9a55-merged.mount: Deactivated successfully. Dec 2 04:51:28 localhost systemd[1]: Reloading. Dec 2 04:51:28 localhost systemd-rc-local-generator[286725]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:51:28 localhost systemd-sysv-generator[286730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:51:29 localhost systemd[1]: Starting Ceph mds.mds.np0005541913.maexpe for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 04:51:29 localhost podman[286790]: Dec 2 04:51:29 localhost podman[286790]: 2025-12-02 09:51:29.463259496 +0000 UTC m=+0.052829995 container create 588592dabdb0941a0740c1cf0ce8f9e94be3ea185621923b001e2ef4eee03fa9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541913-maexpe, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vcs-type=git, version=7, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:51:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e9ca40f5b00254365b2f578b608dfdd4e3b1686706b09cb19c7f704b9eba89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:51:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e9ca40f5b00254365b2f578b608dfdd4e3b1686706b09cb19c7f704b9eba89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:51:29 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/b2e9ca40f5b00254365b2f578b608dfdd4e3b1686706b09cb19c7f704b9eba89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:51:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e9ca40f5b00254365b2f578b608dfdd4e3b1686706b09cb19c7f704b9eba89/merged/var/lib/ceph/mds/ceph-mds.np0005541913.maexpe supports timestamps until 2038 (0x7fffffff) Dec 2 04:51:29 localhost podman[286790]: 2025-12-02 09:51:29.520710823 +0000 UTC m=+0.110281302 container init 588592dabdb0941a0740c1cf0ce8f9e94be3ea185621923b001e2ef4eee03fa9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541913-maexpe, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph) Dec 2 04:51:29 localhost podman[286790]: 2025-12-02 09:51:29.527812054 +0000 UTC m=+0.117382543 container start 
588592dabdb0941a0740c1cf0ce8f9e94be3ea185621923b001e2ef4eee03fa9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541913-maexpe, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:51:29 localhost bash[286790]: 588592dabdb0941a0740c1cf0ce8f9e94be3ea185621923b001e2ef4eee03fa9 Dec 2 04:51:29 localhost podman[286790]: 2025-12-02 09:51:29.438130974 +0000 UTC m=+0.027701463 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:51:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:51:29 localhost systemd[1]: Started Ceph mds.mds.np0005541913.maexpe for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 04:51:29 localhost ceph-mds[286809]: set uid:gid to 167:167 (ceph:ceph) Dec 2 04:51:29 localhost ceph-mds[286809]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2 Dec 2 04:51:29 localhost ceph-mds[286809]: main not setting numa affinity Dec 2 04:51:29 localhost ceph-mds[286809]: pidfile_write: ignore empty --pid-file Dec 2 04:51:29 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541913-maexpe[286805]: starting mds.mds.np0005541913.maexpe at Dec 2 04:51:29 localhost ceph-mds[286809]: mds.mds.np0005541913.maexpe Updating MDS map to version 7 from mon.1 Dec 2 04:51:29 localhost podman[286810]: 2025-12-02 09:51:29.61701708 +0000 UTC m=+0.065355700 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 04:51:29 localhost podman[286810]: 2025-12-02 09:51:29.632190407 +0000 UTC m=+0.080529007 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 04:51:29 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:51:29 localhost ceph-mds[286809]: mds.mds.np0005541913.maexpe Updating MDS map to version 8 from mon.1 Dec 2 04:51:29 localhost ceph-mds[286809]: mds.mds.np0005541913.maexpe Monitors have assigned me to become a standby. Dec 2 04:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:51:31 localhost nova_compute[281854]: 2025-12-02 09:51:31.450 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:31 localhost podman[286848]: 2025-12-02 09:51:31.46096941 +0000 UTC m=+0.096395882 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:51:31 localhost podman[286848]: 2025-12-02 09:51:31.473958857 +0000 UTC m=+0.109385249 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:51:31 localhost systemd[1]: 
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 04:51:32 localhost nova_compute[281854]: 2025-12-02 09:51:32.676 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:33 localhost podman[286998]: 2025-12-02 09:51:33.588377474 +0000 UTC m=+0.099067572 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.buildah.version=1.41.4, name=rhceph, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux ) Dec 2 04:51:33 localhost podman[286998]: 2025-12-02 09:51:33.675012912 +0000 UTC m=+0.185703000 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, RELEASE=main, ceph=True, version=7, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Dec 2 04:51:34 localhost openstack_network_exporter[242845]: ERROR 09:51:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:51:34 localhost openstack_network_exporter[242845]: ERROR 09:51:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:51:34 localhost openstack_network_exporter[242845]: ERROR 09:51:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:51:34 localhost openstack_network_exporter[242845]: ERROR 09:51:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:51:34 localhost openstack_network_exporter[242845]: Dec 2 04:51:34 localhost openstack_network_exporter[242845]: ERROR 09:51:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 
04:51:34 localhost openstack_network_exporter[242845]: Dec 2 04:51:34 localhost systemd[1]: session-62.scope: Deactivated successfully. Dec 2 04:51:34 localhost systemd-logind[757]: Session 62 logged out. Waiting for processes to exit. Dec 2 04:51:34 localhost systemd-logind[757]: Removed session 62. Dec 2 04:51:36 localhost podman[240799]: time="2025-12-02T09:51:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:51:36 localhost podman[240799]: @ - - [02/Dec/2025:09:51:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149882 "" "Go-http-client/1.1" Dec 2 04:51:36 localhost podman[240799]: @ - - [02/Dec/2025:09:51:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17717 "" "Go-http-client/1.1" Dec 2 04:51:36 localhost nova_compute[281854]: 2025-12-02 09:51:36.452 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:37 localhost nova_compute[281854]: 2025-12-02 09:51:37.711 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:51:39 localhost podman[287120]: 2025-12-02 09:51:39.155787448 +0000 UTC m=+0.073861378 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:51:39 localhost podman[287120]: 2025-12-02 09:51:39.191098572 +0000 UTC m=+0.109172532 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 04:51:39 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 04:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:51:40 localhost podman[287139]: 2025-12-02 09:51:40.429863324 +0000 UTC m=+0.074213907 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:51:40 localhost podman[287139]: 2025-12-02 09:51:40.439149223 +0000 UTC m=+0.083499786 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:51:40 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:51:40 localhost podman[287140]: 2025-12-02 09:51:40.519228076 +0000 UTC m=+0.161676588 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Dec 2 04:51:40 localhost podman[287140]: 2025-12-02 09:51:40.547809561 +0000 UTC m=+0.190258133 
container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller) Dec 2 04:51:40 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:51:41 localhost nova_compute[281854]: 2025-12-02 09:51:41.454 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:42 localhost nova_compute[281854]: 2025-12-02 09:51:42.750 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:42 localhost nova_compute[281854]: 2025-12-02 09:51:42.935 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:42 localhost nova_compute[281854]: 2025-12-02 09:51:42.959 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:42 localhost nova_compute[281854]: 2025-12-02 09:51:42.959 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:43 localhost nova_compute[281854]: 2025-12-02 09:51:43.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:43 localhost nova_compute[281854]: 2025-12-02 09:51:43.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:43 localhost nova_compute[281854]: 2025-12-02 09:51:43.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:51:44 localhost nova_compute[281854]: 2025-12-02 09:51:44.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:44 localhost nova_compute[281854]: 2025-12-02 09:51:44.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:44 localhost nova_compute[281854]: 2025-12-02 09:51:44.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:51:44 localhost nova_compute[281854]: 2025-12-02 09:51:44.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.218 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:51:45 localhost 
nova_compute[281854]: 2025-12-02 09:51:45.218 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.219 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.219 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.637 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.657 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.657 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.658 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.674 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.675 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock 
"compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.675 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.676 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:51:45 localhost nova_compute[281854]: 2025-12-02 09:51:45.676 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.107 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.262 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.263 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.456 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.462 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.464 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12277MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.465 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.465 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 
04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.550 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.551 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.551 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:51:46 localhost nova_compute[281854]: 2025-12-02 09:51:46.594 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:51:47 localhost nova_compute[281854]: 2025-12-02 09:51:47.047 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:51:47 localhost nova_compute[281854]: 2025-12-02 09:51:47.056 
281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:51:47 localhost nova_compute[281854]: 2025-12-02 09:51:47.211 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:51:47 localhost nova_compute[281854]: 2025-12-02 09:51:47.213 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:51:47 localhost nova_compute[281854]: 2025-12-02 09:51:47.214 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:51:47 localhost nova_compute[281854]: 2025-12-02 09:51:47.384 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:47 localhost nova_compute[281854]: 2025-12-02 09:51:47.385 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:51:47 localhost nova_compute[281854]: 2025-12-02 09:51:47.779 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:51 localhost nova_compute[281854]: 2025-12-02 09:51:51.458 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:51:52 localhost podman[287229]: 2025-12-02 09:51:52.428914008 +0000 UTC m=+0.069487101 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 2 04:51:52 localhost podman[287229]: 2025-12-02 09:51:52.438715209 +0000 UTC m=+0.079288362 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 2 04:51:52 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:51:52 localhost nova_compute[281854]: 2025-12-02 09:51:52.814 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:51:56 localhost podman[287248]: 2025-12-02 09:51:56.446829935 +0000 UTC m=+0.087228245 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:51:56 localhost nova_compute[281854]: 2025-12-02 09:51:56.495 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:51:56 localhost podman[287248]: 2025-12-02 09:51:56.499188697 +0000 UTC m=+0.139586967 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3) Dec 2 04:51:56 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:51:57 localhost nova_compute[281854]: 2025-12-02 09:51:57.817 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:52:00 localhost podman[287266]: 2025-12-02 09:52:00.445548909 +0000 UTC m=+0.089768963 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, config_id=edpm, distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc.) Dec 2 04:52:00 localhost podman[287266]: 2025-12-02 09:52:00.485094188 +0000 UTC m=+0.129314232 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible) Dec 2 04:52:00 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:52:01 localhost nova_compute[281854]: 2025-12-02 09:52:01.498 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:52:02 localhost podman[287287]: 2025-12-02 09:52:02.446249823 +0000 UTC m=+0.088037776 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, 
config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:52:02 localhost podman[287287]: 2025-12-02 09:52:02.45099081 +0000 UTC m=+0.092778783 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:52:02 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:52:02 localhost nova_compute[281854]: 2025-12-02 09:52:02.821 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:52:03.036 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:52:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:52:03.036 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:52:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:52:03.037 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:52:04 localhost openstack_network_exporter[242845]: ERROR 09:52:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:52:04 localhost openstack_network_exporter[242845]: ERROR 09:52:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:52:04 localhost openstack_network_exporter[242845]: ERROR 09:52:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:52:04 localhost openstack_network_exporter[242845]: ERROR 09:52:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:52:04 localhost openstack_network_exporter[242845]: Dec 2 04:52:04 localhost 
openstack_network_exporter[242845]: ERROR 09:52:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:52:04 localhost openstack_network_exporter[242845]: Dec 2 04:52:06 localhost podman[240799]: time="2025-12-02T09:52:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:52:06 localhost podman[240799]: @ - - [02/Dec/2025:09:52:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149882 "" "Go-http-client/1.1" Dec 2 04:52:06 localhost podman[240799]: @ - - [02/Dec/2025:09:52:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17721 "" "Go-http-client/1.1" Dec 2 04:52:06 localhost nova_compute[281854]: 2025-12-02 09:52:06.502 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:07 localhost nova_compute[281854]: 2025-12-02 09:52:07.822 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:52:09 localhost systemd[1]: tmp-crun.Bavsoi.mount: Deactivated successfully. 
Dec 2 04:52:09 localhost podman[287328]: 2025-12-02 09:52:09.393003644 +0000 UTC m=+0.083722482 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd) Dec 2 04:52:09 localhost podman[287328]: 2025-12-02 09:52:09.431538355 +0000 UTC m=+0.122257153 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3) Dec 2 04:52:09 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 04:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:52:11 localhost podman[287350]: 2025-12-02 09:52:11.455902393 +0000 UTC m=+0.091049058 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:52:11 localhost nova_compute[281854]: 2025-12-02 09:52:11.506 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:11 localhost systemd[1]: tmp-crun.YN0ypF.mount: Deactivated successfully. 
Dec 2 04:52:11 localhost podman[287350]: 2025-12-02 09:52:11.522856574 +0000 UTC m=+0.158003229 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:52:11 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:52:11 localhost podman[287349]: 2025-12-02 09:52:11.527639352 +0000 UTC m=+0.164762191 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:52:11 localhost podman[287349]: 2025-12-02 09:52:11.612244126 +0000 UTC m=+0.249366955 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:52:11 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:52:11 localhost systemd[1]: session-63.scope: Deactivated successfully. Dec 2 04:52:11 localhost systemd[1]: session-63.scope: Consumed 1.322s CPU time. Dec 2 04:52:11 localhost systemd-logind[757]: Session 63 logged out. Waiting for processes to exit. Dec 2 04:52:11 localhost systemd-logind[757]: Removed session 63. Dec 2 04:52:12 localhost nova_compute[281854]: 2025-12-02 09:52:12.853 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.101 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.102 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.106 12 DEBUG 
ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f68abb7a-6d65-46b4-ae25-ba06ad2826c5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.102512', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '958ae40c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': 
'96cc0523df0953e21713103908ad541f6d618aa9f201a876b55ef60d1da16dea'}]}, 'timestamp': '2025-12-02 09:52:16.107525', '_unique_id': '1eebfdd7c28c4348a82991a22094dce9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR 
oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.109 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.110 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.122 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.122 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '957d602c-dab8-4f17-a4fe-d71d9d767324', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.110852', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '958d3234-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.32996604, 'message_signature': '7bee86e91c787717a78d06b25765919a507e826d949be49596c2eea407b4ea60'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.110852', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '958d47ce-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.32996604, 'message_signature': '31502fb1d8622b1e2f840cfc11dc43227d9a77e83370d4f423a63f27d00e38d5'}]}, 'timestamp': '2025-12-02 09:52:16.123096', '_unique_id': 'b612ddedbe8d410ab72f7ae4fb168123'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.124 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.125 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.125 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0f7c3830-23cb-45c6-be9d-891e01ddf9bc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.125689', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '958dbfc4-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': '97e02d78b1bbc3374973c9b9c41d9d3e14ac230b07919bdb519400041ef67495'}]}, 'timestamp': '2025-12-02 09:52:16.126193', '_unique_id': '0ce50e725c654695a016287fba175c7b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.127 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.128 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.128 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6c55467e-26fd-4f39-94bf-300425f2ba4a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.128446', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '958e2d56-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': '406edc9addeaafa1300c7fc873da141031354eb67b89210926adad222563ee49'}]}, 'timestamp': '2025-12-02 09:52:16.128997', '_unique_id': '962d95a9bc404f99bd206acd2d546523'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.130 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.131 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.131 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.131 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.161 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.162 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2f43817e-9587-431b-8321-eb96e79295ee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.131650', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9593423c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': 'e48f89f65265954f13a3289cb4ef4d15dd09a75c415ccc4ed6eb31aa8278cb0c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.131650', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '959359ac-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': 'c5a3ba69bcde6537a8947b9c6808ce88c2eaf3edf5f36a1704af433c17928424'}]}, 'timestamp': '2025-12-02 09:52:16.162954', '_unique_id': 'b9b163d3f9a644b3926b89dc3949db76'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.165 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.165 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '63df1fd9-413f-4d6f-9a72-5b1e128c4178', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.165819', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '9593df80-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': 'd03bf24facf1a5a7f818e827edc7ca9aea0c174fcd4069c6cb3d4a226d397c0a'}]}, 'timestamp': '2025-12-02 09:52:16.166323', '_unique_id': '5e559a76283740cfaf0c4fc739eb719e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.168 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.168 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.169 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f85f63bf-2e46-40af-baa6-9fc44bdba892', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.168691', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '95944eca-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': '89dae6af691f27d0f4add2971b91ca18ddc716051cfc16a9b5c794b3ff6e6b0e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.168691', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '95946126-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': '3d8dbeb966b3a50b4de7b7469ff5c25b6f0daa0f64d7976ca5be1541f26c4228'}]}, 'timestamp': '2025-12-02 09:52:16.169644', '_unique_id': '5cee180819b0404f8a2680239b06a38b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.171 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.172 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.172 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '97fac221-566c-4379-9207-7177dfd3997e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.171943', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9594cfee-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': '2bae40fc2f8f1861bd699eef94cdcab385ebdfe8cb20247efe4a0f6a70fefa8d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.171943', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9594e1f0-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': '24677901739101c8b11916c452189314a6a18123bba44546adda812233880aea'}]}, 'timestamp': '2025-12-02 09:52:16.172907', '_unique_id': '33cccf8512324cf99ec600b41805dac4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:52:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.175 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.175 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.175 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'bb89c13a-1cfd-4130-ae83-2d4d1ba6562f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.175381', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '95955586-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.32996604, 'message_signature': '987ba902dd2eeb10531f834142af5384520fc297e79b1342c90abcf7148d1ba4'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.175381', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '959567a6-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.32996604, 'message_signature': 'a07287f66cc1ab3cbe23172332b0ac1e32e40247ab61d19155193dfed635f92d'}]}, 'timestamp': '2025-12-02 09:52:16.176321', '_unique_id': '11cca913134f42e2a07d43d906dfb404'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.178 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.178 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.178 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.179 12 DEBUG ceilometer.compute.pollsters [-] 
b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '78f55d7c-9f7b-474e-bb7c-81d202b7330f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.178780', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9595d89e-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': 'ddf0aaf40afb16c9f5a740758ec310b058a5df5b5ea63f7c8744887962f1a530'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 
'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.178780', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9595e85c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': 'e44b3eba060aa07dd8688f09cd4d377503ebaebfa2abf14f3b6defcd059911d6'}]}, 'timestamp': '2025-12-02 09:52:16.179652', '_unique_id': '8fbce8031c5646dcb0249f1ba2faa9ed'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.181 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.182 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.182 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'de3afa04-0fba-46bc-945d-15364ed5dbae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.182018', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '95965724-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': '165c7c9c974c9d20e94499b6e051dc47a76268ac45c2df5ab3b3776259d0d078'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.182018', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '95966868-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': '5557dc59330d7df857fd907fd75d24620caf9c5169cb5d0da030f1fbd37569fd'}]}, 'timestamp': '2025-12-02 09:52:16.182898', '_unique_id': '332f450cca0144889858d62eb36a10a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.184 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 04:52:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bff2ba62-0237-4b44-8e6c-f9fc4a9487a8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:52:16.185099', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '959933d6-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.419716882, 'message_signature': '2c877ae6d793b8a73bde3ca77f9b057f09f25528420fb59d2210dbec93fb1591'}]}, 'timestamp': '2025-12-02 09:52:16.201228', '_unique_id': 'd410d1e50b064737bb0866101cb86b73'}: 
kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:52:16.202 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging The above 
exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] 
Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.203 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.204 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ecdfa490-5949-4548-8879-3efe7f754933', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.203523', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9599a226-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': 'f91031e95e355f6eabb6065a261cf7f9b421f6af90e00d3baea95a786987004a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.203523', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9599b2b6-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.350742317, 'message_signature': '3891d3e3cca8a84887a38a78b354b73e8f4741a6d4f8d60f503313c17a65761b'}]}, 'timestamp': '2025-12-02 09:52:16.204457', '_unique_id': '7397defa58b54f5f841af961545b3f09'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '36e9c54d-a0ae-4c58-87a3-1b7eef4715dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.206822', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '959a1ff8-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': '01a73dc9f3388ee0111daef93d5d7ce7e9c747409711a1e9008a08e69000c7ec'}]}, 'timestamp': '2025-12-02 09:52:16.207282', '_unique_id': '7e9289fd34db40d9a9d55c71440905a4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.209 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0261bdac-b1bf-43f1-a8af-725867a7335f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.209462', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '959a88a8-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': '930998623cf5e98c166f19676b0080244059b65c3451bdfeffe687ca9b2f00a7'}]}, 'timestamp': '2025-12-02 09:52:16.209967', '_unique_id': 'a03a263ef3b84d9c9e3329450e6ae87b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.210 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.212 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.212 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 12730000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a39441cf-3291-4b31-a1c1-058f4b0f42d0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 12730000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:52:16.212197', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '959af1a8-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.419716882, 'message_signature': 'fcce274d67905f64394c5214e06710b64ffa199d9e1a91f19b344c9efc5db0ee'}]}, 'timestamp': '2025-12-02 09:52:16.212676', '_unique_id': 'f72d2abaf4404761ba4c32cdc4c45c8e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.213 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.214 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.215 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9f5c6705-1cce-4e5e-9217-25f3c9e92cc6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.214976', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '959b5f44-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': 'b020445a9bee67893fa9fbcac5f86e63aa1a3a522aa5f2cb5cb0ddcd75fb13f0'}]}, 'timestamp': '2025-12-02 09:52:16.215504', '_unique_id': '00e11962e3254311bffd89c5cf4e89f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.217 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9d08797b-a11e-4ac5-b30b-5883788a1589', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.217720', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '959bc9de-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': '026b27fd7ee3f97987d19e6c19746bef7b7bbb3bcd54d77a459818a6e1ebc4f1'}]}, 'timestamp': '2025-12-02 09:52:16.218195', '_unique_id': '7bc387e0c5614c55bae590784d737a17'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '651519ca-19a0-4769-8d0b-7e65d9383b8b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.220697', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '959c3ef0-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': '45c34360727ef1e6fc9624f4cfc3be00396d1f20cb1f31ed1eb134d94b6b2817'}]}, 'timestamp': '2025-12-02 09:52:16.221185', '_unique_id': '7fe343cbda4f4d539d4f47e25e4244b5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.223 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a9086512-f357-486d-a494-669bc2a840cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:52:16.223351', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '959ca836-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.321565376, 'message_signature': 'fd6b8bcce6fa97e9a3c359ff7300bba9a2ea4747bdb2cd8974a66fbc0e9d4aea'}]}, 'timestamp': '2025-12-02 09:52:16.223887', '_unique_id': '8fbd11e8f97e45329157bcbf4d70811c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.224 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.226 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.226 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.226 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '667c5d5b-701e-42bb-94a0-62de42dcd8cc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:52:16.226189', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '959d14ec-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.32996604, 'message_signature': 'f956a5907a324d5e014b42ddd747677865209199d7dcc1f989854e8ae46a2a9e'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:52:16.226189', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '959d26da-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11298.32996604, 'message_signature': '5d447f19dd97449d45663ea1fcf036a18528880acf9d78042caf5338baa25245'}]}, 'timestamp': '2025-12-02 09:52:16.227128', '_unique_id': 'abd5a8a4855b474da8a096a5f8489d97'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:52:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:52:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:52:16.227 12 ERROR oslo_messaging.notify.messaging Dec 2 04:52:16 localhost nova_compute[281854]: 2025-12-02 09:52:16.531 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:17 localhost nova_compute[281854]: 2025-12-02 09:52:17.896 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:21 localhost nova_compute[281854]: 2025-12-02 09:52:21.568 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:22 localhost systemd[1]: Stopping User Manager for UID 1003... Dec 2 04:52:22 localhost systemd[286017]: Activating special unit Exit the Session... Dec 2 04:52:22 localhost systemd[286017]: Stopped target Main User Target. 
Dec 2 04:52:22 localhost systemd[286017]: Stopped target Basic System. Dec 2 04:52:22 localhost systemd[286017]: Stopped target Paths. Dec 2 04:52:22 localhost systemd[286017]: Stopped target Sockets. Dec 2 04:52:22 localhost systemd[286017]: Stopped target Timers. Dec 2 04:52:22 localhost systemd[286017]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 2 04:52:22 localhost systemd[286017]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 04:52:22 localhost systemd[286017]: Closed D-Bus User Message Bus Socket. Dec 2 04:52:22 localhost systemd[286017]: Stopped Create User's Volatile Files and Directories. Dec 2 04:52:22 localhost systemd[286017]: Removed slice User Application Slice. Dec 2 04:52:22 localhost systemd[286017]: Reached target Shutdown. Dec 2 04:52:22 localhost systemd[286017]: Finished Exit the Session. Dec 2 04:52:22 localhost systemd[286017]: Reached target Exit the Session. Dec 2 04:52:22 localhost systemd[1]: user@1003.service: Deactivated successfully. Dec 2 04:52:22 localhost systemd[1]: Stopped User Manager for UID 1003. Dec 2 04:52:22 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Dec 2 04:52:22 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Dec 2 04:52:22 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Dec 2 04:52:22 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Dec 2 04:52:22 localhost systemd[1]: Removed slice User Slice of UID 1003. Dec 2 04:52:22 localhost systemd[1]: user-1003.slice: Consumed 1.672s CPU time. Dec 2 04:52:22 localhost nova_compute[281854]: 2025-12-02 09:52:22.919 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 04:52:23 localhost podman[287396]: 2025-12-02 09:52:23.016365727 +0000 UTC m=+0.072364798 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:52:23 localhost podman[287396]: 2025-12-02 09:52:23.029966501 +0000 UTC m=+0.085965562 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 04:52:23 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:52:26 localhost nova_compute[281854]: 2025-12-02 09:52:26.571 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:52:27 localhost podman[287501]: 2025-12-02 09:52:27.441704808 +0000 UTC m=+0.079197670 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 04:52:27 localhost podman[287501]: 2025-12-02 09:52:27.45298343 +0000 UTC m=+0.090476352 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 04:52:27 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:52:27 localhost nova_compute[281854]: 2025-12-02 09:52:27.958 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:52:31 localhost systemd[1]: tmp-crun.W410F3.mount: Deactivated successfully. Dec 2 04:52:31 localhost podman[287555]: 2025-12-02 09:52:31.448216341 +0000 UTC m=+0.079383795 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm) Dec 2 04:52:31 localhost podman[287555]: 2025-12-02 09:52:31.465197426 +0000 UTC m=+0.096364860 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350) Dec 2 04:52:31 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:52:31 localhost nova_compute[281854]: 2025-12-02 09:52:31.611 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:32 localhost nova_compute[281854]: 2025-12-02 09:52:32.963 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:52:33 localhost systemd[1]: tmp-crun.4lXzKy.mount: Deactivated successfully. 
Dec 2 04:52:33 localhost podman[287575]: 2025-12-02 09:52:33.461956894 +0000 UTC m=+0.094046408 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:52:33 localhost podman[287575]: 2025-12-02 09:52:33.500789654 +0000 UTC m=+0.132879138 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:52:33 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:52:34 localhost openstack_network_exporter[242845]: ERROR 09:52:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:52:34 localhost openstack_network_exporter[242845]: ERROR 09:52:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:52:34 localhost openstack_network_exporter[242845]: ERROR 09:52:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:52:34 localhost openstack_network_exporter[242845]: ERROR 09:52:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:52:34 localhost openstack_network_exporter[242845]: Dec 2 04:52:34 localhost openstack_network_exporter[242845]: ERROR 09:52:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:52:34 localhost openstack_network_exporter[242845]: Dec 2 04:52:36 localhost podman[240799]: time="2025-12-02T09:52:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:52:36 localhost podman[240799]: @ - - [02/Dec/2025:09:52:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149882 "" "Go-http-client/1.1" Dec 2 04:52:36 localhost podman[240799]: @ - - [02/Dec/2025:09:52:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17721 "" "Go-http-client/1.1" Dec 2 04:52:36 localhost nova_compute[281854]: 2025-12-02 09:52:36.655 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:37 localhost nova_compute[281854]: 2025-12-02 09:52:37.968 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:40 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:52:40 localhost podman[287598]: 2025-12-02 09:52:40.429401668 +0000 UTC m=+0.073852387 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 04:52:40 localhost podman[287598]: 2025-12-02 
09:52:40.444098631 +0000 UTC m=+0.088549400 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 04:52:40 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:52:40 localhost nova_compute[281854]: 2025-12-02 09:52:40.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:40 localhost nova_compute[281854]: 2025-12-02 09:52:40.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 04:52:40 localhost nova_compute[281854]: 2025-12-02 09:52:40.845 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 04:52:40 localhost nova_compute[281854]: 2025-12-02 09:52:40.846 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:40 localhost nova_compute[281854]: 2025-12-02 09:52:40.846 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 04:52:40 localhost nova_compute[281854]: 2025-12-02 09:52:40.861 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:41 localhost nova_compute[281854]: 2025-12-02 09:52:41.690 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:41 localhost nova_compute[281854]: 2025-12-02 09:52:41.872 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:52:42 localhost podman[287617]: 2025-12-02 09:52:42.423494704 +0000 UTC m=+0.067543668 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:52:42 localhost podman[287617]: 2025-12-02 09:52:42.435893656 +0000 UTC m=+0.079942630 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:52:42 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 04:52:42 localhost podman[287618]: 2025-12-02 09:52:42.512392694 +0000 UTC m=+0.153662424 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 04:52:42 localhost podman[287618]: 2025-12-02 09:52:42.573244572 +0000 UTC m=+0.214514392 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 04:52:42 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:52:42 localhost nova_compute[281854]: 2025-12-02 09:52:42.970 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:43 localhost nova_compute[281854]: 2025-12-02 09:52:43.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:43 localhost nova_compute[281854]: 2025-12-02 09:52:43.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:52:44 localhost nova_compute[281854]: 2025-12-02 09:52:44.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:44 localhost nova_compute[281854]: 2025-12-02 09:52:44.829 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:52:44 localhost nova_compute[281854]: 2025-12-02 09:52:44.829 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.237 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.238 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.238 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.238 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.728 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.745 281858 DEBUG oslo_concurrency.lockutils [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.745 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.746 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.747 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.827 281858 DEBUG oslo_service.periodic_task [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.851 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.851 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.852 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.852 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:52:45 localhost nova_compute[281854]: 2025-12-02 09:52:45.852 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.292 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.366 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.367 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.571 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.573 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=12275MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.573 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.573 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.685 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.685 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.686 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.693 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.748 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.823 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 
41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.824 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.844 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.864 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: 
COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:52:46 localhost nova_compute[281854]: 2025-12-02 09:52:46.902 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:52:47 localhost nova_compute[281854]: 2025-12-02 09:52:47.349 281858 DEBUG oslo_concurrency.processutils [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:52:47 localhost nova_compute[281854]: 2025-12-02 09:52:47.354 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:52:47 localhost nova_compute[281854]: 2025-12-02 09:52:47.367 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:52:47 localhost nova_compute[281854]: 2025-12-02 09:52:47.368 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:52:47 localhost nova_compute[281854]: 2025-12-02 09:52:47.369 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:52:47 localhost nova_compute[281854]: 2025-12-02 09:52:47.973 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:51 localhost nova_compute[281854]: 2025-12-02 09:52:51.733 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:52 localhost nova_compute[281854]: 2025-12-02 09:52:52.976 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:52:53 localhost podman[287727]: 2025-12-02 09:52:53.446285392 +0000 UTC m=+0.088349016 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Dec 2 04:52:53 localhost podman[287727]: 2025-12-02 09:52:53.455914929 +0000 UTC m=+0.097978503 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 04:52:53 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:52:56 localhost nova_compute[281854]: 2025-12-02 09:52:56.774 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:58 localhost nova_compute[281854]: 2025-12-02 09:52:58.000 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:52:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:52:58 localhost systemd[1]: tmp-crun.Xy3x27.mount: Deactivated successfully. 
Dec 2 04:52:58 localhost podman[287781]: 2025-12-02 09:52:58.446429968 +0000 UTC m=+0.087155085 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:52:58 localhost podman[287781]: 2025-12-02 09:52:58.481148087 +0000 UTC 
m=+0.121873134 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 04:52:58 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:52:59 localhost podman[287875]: Dec 2 04:52:59 localhost podman[287875]: 2025-12-02 09:52:59.303145725 +0000 UTC m=+0.058948169 container create 26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lovelace, RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.expose-services=, release=1763362218, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:52:59 localhost systemd[1]: Started libpod-conmon-26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48.scope. Dec 2 04:52:59 localhost systemd[1]: Started libcrun container. 
Dec 2 04:52:59 localhost podman[287875]: 2025-12-02 09:52:59.364368193 +0000 UTC m=+0.120170637 container init 26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lovelace, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, GIT_CLEAN=True, distribution-scope=public, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, vcs-type=git) Dec 2 04:52:59 localhost podman[287875]: 2025-12-02 09:52:59.271809526 +0000 UTC m=+0.027612020 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:52:59 localhost podman[287875]: 2025-12-02 09:52:59.374535636 +0000 UTC m=+0.130338050 container start 26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lovelace, name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public) Dec 2 04:52:59 localhost podman[287875]: 2025-12-02 09:52:59.374750412 +0000 UTC m=+0.130552896 container attach 26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lovelace, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, release=1763362218, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, ceph=True) Dec 2 04:52:59 localhost nostalgic_lovelace[287891]: 167 167 Dec 2 04:52:59 localhost systemd[1]: libpod-26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48.scope: Deactivated successfully. Dec 2 04:52:59 localhost podman[287875]: 2025-12-02 09:52:59.379050787 +0000 UTC m=+0.134853221 container died 26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lovelace, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, release=1763362218, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, version=7, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:52:59 localhost systemd[1]: var-lib-containers-storage-overlay-07fe789c99b3bce0fc75b062fef4d36fa8bf82031a174529546fd1d578ba69ff-merged.mount: Deactivated successfully. 
Dec 2 04:52:59 localhost podman[287896]: 2025-12-02 09:52:59.461443182 +0000 UTC m=+0.069220204 container remove 26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lovelace, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, architecture=x86_64) Dec 2 04:52:59 localhost systemd[1]: libpod-conmon-26ed9e67552f8a49a72c11884745c5fb0ecfcee2138bda262a54b323b510dd48.scope: Deactivated successfully. Dec 2 04:52:59 localhost systemd[1]: Reloading. Dec 2 04:52:59 localhost systemd-rc-local-generator[287941]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:52:59 localhost systemd-sysv-generator[287945]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:52:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:52:59 localhost systemd[1]: Reloading. Dec 2 04:52:59 localhost systemd-sysv-generator[287979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:52:59 localhost systemd-rc-local-generator[287976]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:00 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:00 localhost systemd[1]: Starting Ceph mgr.np0005541913.mfesdm for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 04:53:00 localhost podman[288041]: Dec 2 04:53:00 localhost podman[288041]: 2025-12-02 09:53:00.583258404 +0000 UTC m=+0.078517693 container create e85a1b86b1d305678ab89219478fd7faa55f027101f5d8de4368bedf666c21c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.openshift.expose-services=, version=7, GIT_BRANCH=main, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:53:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0959075e92bb626c58513d9bc964dcc957e5b9ceee4c73af40e41da8eecae7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0959075e92bb626c58513d9bc964dcc957e5b9ceee4c73af40e41da8eecae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:00 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/4d0959075e92bb626c58513d9bc964dcc957e5b9ceee4c73af40e41da8eecae7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0959075e92bb626c58513d9bc964dcc957e5b9ceee4c73af40e41da8eecae7/merged/var/lib/ceph/mgr/ceph-np0005541913.mfesdm supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:00 localhost podman[288041]: 2025-12-02 09:53:00.638167353 +0000 UTC m=+0.133426632 container init e85a1b86b1d305678ab89219478fd7faa55f027101f5d8de4368bedf666c21c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat Ceph Storage 7, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, version=7, vendor=Red Hat, Inc., name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:53:00 localhost podman[288041]: 2025-12-02 09:53:00.643704242 +0000 UTC m=+0.138963531 container start e85a1b86b1d305678ab89219478fd7faa55f027101f5d8de4368bedf666c21c8 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, name=rhceph, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:53:00 localhost bash[288041]: e85a1b86b1d305678ab89219478fd7faa55f027101f5d8de4368bedf666c21c8 Dec 2 04:53:00 localhost podman[288041]: 2025-12-02 09:53:00.551208686 +0000 UTC m=+0.046467995 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:00 localhost systemd[1]: Started Ceph mgr.np0005541913.mfesdm for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 04:53:00 localhost ceph-mgr[288059]: set uid:gid to 167:167 (ceph:ceph) Dec 2 04:53:00 localhost ceph-mgr[288059]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Dec 2 04:53:00 localhost ceph-mgr[288059]: pidfile_write: ignore empty --pid-file Dec 2 04:53:00 localhost ceph-mgr[288059]: mgr[py] Loading python module 'alerts' Dec 2 04:53:00 localhost ceph-mgr[288059]: mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 2 04:53:00 localhost ceph-mgr[288059]: mgr[py] Loading python module 'balancer' Dec 2 04:53:00 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:00.772+0000 7f789d852140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 2 04:53:00 localhost ceph-mgr[288059]: mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 2 04:53:00 localhost ceph-mgr[288059]: mgr[py] Loading python module 'cephadm' Dec 2 04:53:00 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:00.839+0000 7f789d852140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 2 04:53:01 localhost ceph-mgr[288059]: mgr[py] Loading python module 'crash' Dec 2 04:53:01 localhost ceph-mgr[288059]: mgr[py] Module crash has missing NOTIFY_TYPES member Dec 2 04:53:01 localhost ceph-mgr[288059]: mgr[py] Loading python module 'dashboard' Dec 2 04:53:01 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:01.468+0000 7f789d852140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Dec 2 04:53:01 localhost nova_compute[281854]: 2025-12-02 09:53:01.818 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:01 localhost ceph-mgr[288059]: mgr[py] Loading python module 'devicehealth' Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Module devicehealth has 
missing NOTIFY_TYPES member Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'diskprediction_local' Dec 2 04:53:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:02.056+0000 7f789d852140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 2 04:53:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Dec 2 04:53:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Dec 2 04:53:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: from numpy import show_config as show_numpy_config Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'influx' Dec 2 04:53:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:02.199+0000 7f789d852140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Module influx has missing NOTIFY_TYPES member Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'insights' Dec 2 04:53:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:02.260+0000 7f789d852140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'iostat' Dec 2 04:53:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. 
Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'k8sevents' Dec 2 04:53:02 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:02.375+0000 7f789d852140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 2 04:53:02 localhost podman[288089]: 2025-12-02 09:53:02.424841199 +0000 UTC m=+0.066616414 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:53:02 localhost podman[288089]: 2025-12-02 09:53:02.43905967 +0000 UTC m=+0.080834935 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, config_data={'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 04:53:02 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'localpool' Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'mds_autoscaler' Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'mirroring' Dec 2 04:53:02 localhost ceph-mgr[288059]: mgr[py] Loading python module 'nfs' Dec 2 04:53:03 localhost nova_compute[281854]: 2025-12-02 09:53:03.002 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:53:03.037 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:53:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:53:03.037 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:53:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:53:03.038 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:53:03 localhost 
ceph-mgr[288059]: mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Loading python module 'orchestrator' Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:03.128+0000 7f789d852140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Loading python module 'osd_perf_query' Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:03.272+0000 7f789d852140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Loading python module 'osd_support' Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:03.335+0000 7f789d852140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:03.390+0000 7f789d852140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Loading python module 'pg_autoscaler' Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:03.458+0000 7f789d852140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Loading python module 'progress' Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Module 
progress has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Loading python module 'prometheus' Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:03.517+0000 7f789d852140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:53:03 localhost systemd[1]: tmp-crun.cnjDIn.mount: Deactivated successfully. Dec 2 04:53:03 localhost podman[288145]: 2025-12-02 09:53:03.69794011 +0000 UTC m=+0.063164071 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:53:03 localhost podman[288145]: 2025-12-02 09:53:03.733129862 +0000 UTC m=+0.098353873 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:53:03 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:03.834+0000 7f789d852140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Loading python module 'rbd_support' Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 2 04:53:03 localhost ceph-mgr[288059]: mgr[py] Loading python module 'restful' Dec 2 04:53:03 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:03.922+0000 7f789d852140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost openstack_network_exporter[242845]: ERROR 09:53:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:53:04 localhost openstack_network_exporter[242845]: ERROR 09:53:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:53:04 localhost openstack_network_exporter[242845]: ERROR 09:53:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:53:04 localhost openstack_network_exporter[242845]: ERROR 09:53:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:53:04 localhost openstack_network_exporter[242845]: Dec 2 04:53:04 localhost openstack_network_exporter[242845]: ERROR 09:53:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:53:04 localhost openstack_network_exporter[242845]: Dec 2 04:53:04 localhost ceph-mgr[288059]: mgr[py] Loading python module 'rgw' Dec 2 04:53:04 localhost ceph-mgr[288059]: mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[288059]: mgr[py] Loading python module 'rook' 
Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:04.319+0000 7f789d852140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[288059]: mgr[py] Module rook has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[288059]: mgr[py] Loading python module 'selftest' Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:04.853+0000 7f789d852140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost podman[288252]: 2025-12-02 09:53:04.890815064 +0000 UTC m=+0.173040262 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:53:04 localhost ceph-mgr[288059]: 
mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 2 04:53:04 localhost ceph-mgr[288059]: mgr[py] Loading python module 'snap_schedule' Dec 2 04:53:04 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:04.957+0000 7f789d852140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost podman[288252]: 2025-12-02 09:53:05.035943199 +0000 UTC m=+0.318168327 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, vendor=Red Hat, Inc.) 
Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Loading python module 'stats' Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Loading python module 'status' Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Module status has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Loading python module 'telegraf' Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:05.196+0000 7f789d852140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Loading python module 'telemetry' Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:05.258+0000 7f789d852140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Loading python module 'test_orchestrator' Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:05.398+0000 7f789d852140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Loading python module 'volumes' Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:05.566+0000 7f789d852140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Loading python module 'zabbix' Dec 2 04:53:05 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:05.771+0000 7f789d852140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:53:05.830+0000 7f789d852140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 2 04:53:05 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x564503731600 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Dec 2 04:53:05 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3096645673 Dec 2 04:53:06 localhost podman[240799]: time="2025-12-02T09:53:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:53:06 localhost podman[240799]: @ - - [02/Dec/2025:09:53:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152018 "" "Go-http-client/1.1" Dec 2 04:53:06 localhost podman[240799]: @ - - [02/Dec/2025:09:53:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18201 "" "Go-http-client/1.1" Dec 2 04:53:06 localhost nova_compute[281854]: 2025-12-02 09:53:06.869 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:08 localhost nova_compute[281854]: 2025-12-02 09:53:08.044 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:53:11 localhost podman[288427]: 2025-12-02 09:53:11.163389895 +0000 UTC m=+0.072365788 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125) Dec 2 04:53:11 localhost podman[288427]: 2025-12-02 09:53:11.173931508 +0000 UTC m=+0.082907401 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS) Dec 2 04:53:11 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:53:11 localhost nova_compute[281854]: 2025-12-02 09:53:11.872 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:53:12 localhost podman[288784]: 2025-12-02 09:53:12.577365687 +0000 UTC m=+0.093229076 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:53:12 localhost podman[288784]: 2025-12-02 09:53:12.614198692 +0000 UTC m=+0.130062061 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 04:53:12 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:53:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:53:12 localhost systemd[1]: tmp-crun.JkDkmD.mount: Deactivated successfully. Dec 2 04:53:12 localhost podman[288842]: 2025-12-02 09:53:12.74265627 +0000 UTC m=+0.083667220 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 04:53:12 localhost podman[288842]: 2025-12-02 09:53:12.829397932 +0000 UTC m=+0.170408902 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller) Dec 2 04:53:12 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:53:13 localhost nova_compute[281854]: 2025-12-02 09:53:13.044 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:16 localhost nova_compute[281854]: 2025-12-02 09:53:16.913 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:18 localhost nova_compute[281854]: 2025-12-02 09:53:18.085 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:19 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x564503730f20 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Dec 2 04:53:20 localhost podman[289232]: Dec 2 04:53:20 localhost podman[289232]: 2025-12-02 09:53:20.147007318 +0000 UTC m=+0.119971742 container create 2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, version=7, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, release=1763362218, maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, 
com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:53:20 localhost podman[289232]: 2025-12-02 09:53:20.06223851 +0000 UTC m=+0.035202874 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:20 localhost systemd[1]: Started libpod-conmon-2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5.scope. Dec 2 04:53:20 localhost systemd[1]: Started libcrun container. Dec 2 04:53:20 localhost podman[289232]: 2025-12-02 09:53:20.224143903 +0000 UTC m=+0.197108277 container init 2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.component=rhceph-container) Dec 2 04:53:20 localhost podman[289232]: 2025-12-02 09:53:20.234162211 +0000 UTC m=+0.207126595 container start 
2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, name=rhceph, ceph=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 2 04:53:20 localhost podman[289232]: 2025-12-02 09:53:20.23450408 +0000 UTC m=+0.207468444 container attach 2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, version=7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z) Dec 2 04:53:20 localhost blissful_carver[289247]: 167 167 Dec 2 04:53:20 localhost systemd[1]: libpod-2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5.scope: Deactivated successfully. Dec 2 04:53:20 localhost podman[289232]: 2025-12-02 09:53:20.238420225 +0000 UTC m=+0.211384599 container died 2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-type=git, GIT_BRANCH=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
com.redhat.component=rhceph-container, version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:53:20 localhost podman[289252]: 2025-12-02 09:53:20.3376447 +0000 UTC m=+0.087633296 container remove 2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_carver, name=rhceph, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1763362218) Dec 2 04:53:20 localhost systemd[1]: libpod-conmon-2af7e1a5ec274f4dd7fe026a3e1fa8d050bbe96f9e795163bd1b95ccfe414ba5.scope: Deactivated successfully. 
Dec 2 04:53:20 localhost podman[289269]: Dec 2 04:53:20 localhost podman[289269]: 2025-12-02 09:53:20.438813657 +0000 UTC m=+0.069633464 container create 9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_wiles, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:53:20 localhost systemd[1]: Started libpod-conmon-9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689.scope. Dec 2 04:53:20 localhost systemd[1]: Started libcrun container. 
Dec 2 04:53:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84d1774558842f4b09f84ede41a5cf8226db5b4ec68ad4fe3cec4e8b8ff7ad0/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84d1774558842f4b09f84ede41a5cf8226db5b4ec68ad4fe3cec4e8b8ff7ad0/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84d1774558842f4b09f84ede41a5cf8226db5b4ec68ad4fe3cec4e8b8ff7ad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84d1774558842f4b09f84ede41a5cf8226db5b4ec68ad4fe3cec4e8b8ff7ad0/merged/var/lib/ceph/mon/ceph-np0005541913 supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:20 localhost podman[289269]: 2025-12-02 09:53:20.4960495 +0000 UTC m=+0.126869307 container init 9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_wiles, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:53:20 localhost podman[289269]: 2025-12-02 09:53:20.511104792 +0000 UTC m=+0.141924589 container start 9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_wiles, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc.) 
Dec 2 04:53:20 localhost podman[289269]: 2025-12-02 09:53:20.5113776 +0000 UTC m=+0.142197387 container attach 9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_wiles, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218) Dec 2 04:53:20 localhost podman[289269]: 2025-12-02 09:53:20.415520264 +0000 UTC m=+0.046340101 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:20 localhost systemd[1]: libpod-9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689.scope: Deactivated successfully. 
Dec 2 04:53:20 localhost podman[289269]: 2025-12-02 09:53:20.593866907 +0000 UTC m=+0.224686714 container died 9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_wiles, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=) Dec 2 04:53:20 localhost podman[289310]: 2025-12-02 09:53:20.67579413 +0000 UTC m=+0.071772812 container remove 9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_wiles, release=1763362218, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 
7, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:53:20 localhost systemd[1]: libpod-conmon-9e453dc76cbbf1996bcf91ec021b1a1078d8ed75d49edf498e46666457530689.scope: Deactivated successfully. Dec 2 04:53:20 localhost systemd[1]: Reloading. Dec 2 04:53:20 localhost systemd-sysv-generator[289354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:53:20 localhost systemd-rc-local-generator[289349]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: var-lib-containers-storage-overlay-1687bcfc1bb24bb3e2eb5968a8a8366954a907b3b6f50d31836b44880826458d-merged.mount: Deactivated successfully. Dec 2 04:53:21 localhost systemd[1]: tmp-crun.wB9kTf.mount: Deactivated successfully. Dec 2 04:53:21 localhost systemd[1]: Reloading. Dec 2 04:53:21 localhost systemd-rc-local-generator[289389]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:53:21 localhost systemd-sysv-generator[289395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:53:21 localhost systemd[1]: Starting Ceph mon.np0005541913 for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 04:53:21 localhost podman[289455]: Dec 2 04:53:21 localhost podman[289455]: 2025-12-02 09:53:21.865688394 +0000 UTC m=+0.051778517 container create 36af0ed2ef00d05ae4aad6f924c8b496242c4f2361918e4bd57717905928e70b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541913, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, release=1763362218, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 2 04:53:21 localhost nova_compute[281854]: 2025-12-02 09:53:21.917 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393e1e54e92e7ce105bdb9ae967dcd71a5af0f60b460340c2a56d8deb0a84a42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:21 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/393e1e54e92e7ce105bdb9ae967dcd71a5af0f60b460340c2a56d8deb0a84a42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393e1e54e92e7ce105bdb9ae967dcd71a5af0f60b460340c2a56d8deb0a84a42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393e1e54e92e7ce105bdb9ae967dcd71a5af0f60b460340c2a56d8deb0a84a42/merged/var/lib/ceph/mon/ceph-np0005541913 supports timestamps until 2038 (0x7fffffff) Dec 2 04:53:21 localhost podman[289455]: 2025-12-02 09:53:21.934062294 +0000 UTC m=+0.120152427 container init 36af0ed2ef00d05ae4aad6f924c8b496242c4f2361918e4bd57717905928e70b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541913, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, name=rhceph, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, 
maintainer=Guillaume Abrioux , io.buildah.version=1.41.4) Dec 2 04:53:21 localhost podman[289455]: 2025-12-02 09:53:21.943484486 +0000 UTC m=+0.129574629 container start 36af0ed2ef00d05ae4aad6f924c8b496242c4f2361918e4bd57717905928e70b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541913, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, RELEASE=main, name=rhceph, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=) Dec 2 04:53:21 localhost bash[289455]: 36af0ed2ef00d05ae4aad6f924c8b496242c4f2361918e4bd57717905928e70b Dec 2 04:53:21 localhost podman[289455]: 2025-12-02 09:53:21.844643131 +0000 UTC m=+0.030733284 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:53:21 localhost systemd[1]: Started Ceph mon.np0005541913 for c7c8e171-a193-56fb-95fa-8879fcfa7074. 
Dec 2 04:53:21 localhost ceph-mon[289473]: set uid:gid to 167:167 (ceph:ceph) Dec 2 04:53:21 localhost ceph-mon[289473]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Dec 2 04:53:21 localhost ceph-mon[289473]: pidfile_write: ignore empty --pid-file Dec 2 04:53:21 localhost ceph-mon[289473]: load: jerasure load: lrc Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: RocksDB version: 7.9.2 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Git sha 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: DB SUMMARY Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: DB Session ID: OW4D0W92HOAH7R2F6LZX Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: CURRENT file: CURRENT Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: IDENTITY file: IDENTITY Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005541913/store.db dir, Total Num: 0, files: Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005541913/store.db: 000004.log size: 761 ; Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.error_if_exists: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.create_if_missing: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.paranoid_checks: 1 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.env: 0x561a19c049e0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.fs: PosixFileSystem Dec 2 04:53:21 localhost 
ceph-mon[289473]: rocksdb: Options.info_log: 0x561a1ab74d20 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.max_file_opening_threads: 16 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.statistics: (nil) Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.use_fsync: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.max_log_file_size: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.log_file_time_to_roll: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.keep_log_file_num: 1000 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.recycle_log_file_num: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.allow_fallocate: 1 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.allow_mmap_reads: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.allow_mmap_writes: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.use_direct_reads: 0 Dec 2 04:53:21 localhost ceph-mon[289473]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.create_missing_column_families: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.db_log_dir: Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.wal_dir: Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.table_cache_numshardbits: 6 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.advise_random_on_open: 
1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.db_write_buffer_size: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.write_buffer_manager: 0x561a1ab85540 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.use_adaptive_mutex: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.rate_limiter: (nil) Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.wal_recovery_mode: 2 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.enable_thread_tracking: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.enable_pipelined_write: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.unordered_write: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.row_cache: None Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.wal_filter: None Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.allow_ingest_behind: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.two_write_queues: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.manual_wal_flush: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.wal_compression: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.atomic_flush: 0 Dec 2 
04:53:22 localhost ceph-mon[289473]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.persist_stats_to_disk: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.log_readahead_size: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.best_efforts_recovery: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.allow_data_in_errors: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.db_host_id: __hostname__ Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.enforce_single_del_contracts: true Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_background_jobs: 2 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_background_compactions: -1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_subcompactions: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.delayed_write_rate : 16777216 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_total_wal_size: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.stats_dump_period_sec: 600 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.stats_persist_period_sec: 600 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 2 04:53:22 
localhost ceph-mon[289473]: rocksdb: Options.max_open_files: -1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bytes_per_sync: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_readahead_size: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_background_flushes: -1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Compression algorithms supported: Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: #011kZSTD supported: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: #011kXpressCompression supported: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: #011kBZip2Compression supported: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: #011kLZ4Compression supported: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: #011kZlibCompression supported: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: #011kSnappyCompression supported: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: DMutex implementation: pthread_mutex_t Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005541913/store.db/MANIFEST-000005 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.merge_operator: Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_filter: 
None Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_filter_factory: None Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.sst_partitioner_factory: None Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561a1ab74980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x561a1ab71350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.write_buffer_size: 33554432 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_write_buffer_number: 2 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression: NoCompression Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression: 
Disabled Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.prefix_extractor: nullptr Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.num_levels: 7 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression_opts.window_bits: -14 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression_opts.level: 32767 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression_opts.strategy: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: 
Options.compression_opts.zstd_max_train_bytes: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression_opts.enabled: false Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.target_file_size_base: 67108864 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.target_file_size_multiplier: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_base: 268435456 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.arena_block_size: 1048576 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.disable_auto_compactions: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.table_properties_collectors: Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.inplace_update_support: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: 
Options.inplace_update_num_locks: 10000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.memtable_huge_page_size: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.bloom_locality: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.max_successive_merges: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.paranoid_file_checks: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.force_consistency_checks: 1 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.report_bg_io_stats: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.ttl: 2592000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.enable_blob_files: false Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.min_blob_size: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.blob_file_size: 268435456 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.blob_compression_type: NoCompression Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.enable_blob_garbage_collection: false Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: 
Options.blob_file_starting_level: 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005541913/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669202000538, "job": 1, "event": "recovery_started", "wal_files": [4]} Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669202003028, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": 
"NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669201, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669202003158, "job": 1, "event": "recovery_finished"} Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561a1ab98e00 Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: DB pointer 0x561a1ac8e000 Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913 does not exist in monmap, will attempt to join an existing cluster Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 04:53:22 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 
MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown 
for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561a1ab71350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 2 04:53:22 localhost ceph-mon[289473]: using public_addr v2:172.18.0.107:0/0 -> [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] Dec 2 04:53:22 localhost ceph-mon[289473]: starting mon.np0005541913 rank -1 at public addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] at bind addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005541913 fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(???) e0 preinit fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing) e4 sync_obtain_latest_monmap Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing) e4 sync_obtain_latest_monmap obtained monmap e4 Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing).mds e16 new map Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps 
allow_multimds_snaps#012created#0112025-12-02T08:05:53.424954+0000#012modified#0112025-12-02T09:52:13.505190+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01184#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26573}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26573 members: 26573#012[mds.mds.np0005541912.ghcwcm{0:26573} state up:active seq 13 addr [v2:172.18.0.106:6808/955707462,v1:172.18.0.106:6809/955707462] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005541914.sqgqkj{-1:16923} state up:standby seq 1 addr [v2:172.18.0.108:6808/2216063099,v1:172.18.0.108:6809/2216063099] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005541913.maexpe{-1:26386} state up:standby seq 1 addr [v2:172.18.0.107:6808/3746047079,v1:172.18.0.107:6809/3746047079] compat {c=[1],r=[1],i=[17ff]}] Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing).osd e85 crush map has features 3314933000852226048, adjusting msgr requires Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing).osd e85 crush map has features 
288514051259236352, adjusting msgr requires Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mgr to host np0005541912.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mgr to host np0005541913.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mgr to host np0005541914.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Saving service mgr spec with placement label:mgr Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' 
entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 2 04:53:22 localhost ceph-mon[289473]: Deploying daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 2 04:53:22 localhost ceph-mon[289473]: Deploying daemon mgr.np0005541913.mfesdm on 
np0005541913.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mon to host np0005541909.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label _admin to host np0005541909.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 2 04:53:22 localhost ceph-mon[289473]: Deploying daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mon to host np0005541910.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label _admin to host np0005541910.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mon to host np0005541911.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label _admin to host np0005541911.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost 
ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mon to host np0005541912.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: Added label _admin to host np0005541912.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mon to host np0005541913.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 
172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:22 localhost ceph-mon[289473]: Added label _admin to host np0005541913.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label mon to host np0005541914.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Added label _admin to host np0005541914.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' 
entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:22 localhost ceph-mon[289473]: Saving service mon spec with placement label:mon Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:22 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:22 localhost ceph-mon[289473]: Deploying daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:53:22 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:22 localhost ceph-mon[289473]: mon.np0005541913@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Dec 2 04:53:23 localhost nova_compute[281854]: 2025-12-02 09:53:23.088 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:53:24 localhost podman[289512]: 2025-12-02 09:53:24.453900549 +0000 UTC m=+0.095869467 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute) Dec 2 04:53:24 localhost podman[289512]: 2025-12-02 09:53:24.465817418 +0000 UTC m=+0.107786396 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 2 04:53:24 localhost systemd[1]: 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:53:26 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x5645037311e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Dec 2 04:53:26 localhost nova_compute[281854]: 2025-12-02 09:53:26.962 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:27 localhost ceph-mon[289473]: mon.np0005541913@-1(probing) e4 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:27 localhost ceph-mon[289473]: mon.np0005541913@-1(probing) e4 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:28 localhost nova_compute[281854]: 2025-12-02 09:53:28.132 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:28 localhost ceph-mon[289473]: mon.np0005541913@-1(probing) e5 my rank is now 4 (was -1) Dec 2 04:53:28 localhost ceph-mon[289473]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election Dec 2 04:53:28 localhost ceph-mon[289473]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 Dec 2 04:53:28 localhost ceph-mon[289473]: mon.np0005541913@4(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:53:29 localhost podman[289531]: 2025-12-02 09:53:29.445778133 +0000 UTC m=+0.085299214 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:53:29 localhost podman[289531]: 2025-12-02 09:53:29.480073101 +0000 UTC 
m=+0.119594182 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:53:29 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:53:29 localhost ceph-mon[289473]: mon.np0005541913@4(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541913@4(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e5 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e5 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Dec 2 04:53:31 localhost ceph-mon[289473]: Deploying daemon mon.np0005541913 on np0005541913.localdomain Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541909 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914 in quorum (ranks 0,1,2,3) Dec 2 04:53:31 localhost ceph-mon[289473]: overall HEALTH_OK Dec 2 04:53:31 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:31 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:31 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:31 localhost 
ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:31 localhost ceph-mon[289473]: Deploying daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:31 localhost ceph-mon[289473]: mgrc update_daemon_metadata mon.np0005541913 metadata {addrs=[v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005541913.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005541913.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541909 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541913 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914,np0005541913 in quorum (ranks 0,1,2,3,4) Dec 2 04:53:31 localhost ceph-mon[289473]: overall HEALTH_OK Dec 2 04:53:31 localhost ceph-mon[289473]: 
from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 2 04:53:31 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x564503730f20 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0 Dec 2 04:53:31 localhost ceph-mon[289473]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election Dec 2 04:53:31 localhost ceph-mon[289473]: paxos.4).electionLogic(22) init, last seen epoch 22 Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541913@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:31 localhost ceph-mon[289473]: mon.np0005541913@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:32 localhost nova_compute[281854]: 2025-12-02 09:53:32.007 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:53:32 localhost podman[289648]: 2025-12-02 09:53:32.875639706 +0000 UTC m=+0.062375241 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:53:32 localhost podman[289648]: 2025-12-02 09:53:32.888484329 +0000 UTC m=+0.075219854 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.) Dec 2 04:53:32 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:53:33 localhost systemd[1]: tmp-crun.Ur2mDe.mount: Deactivated successfully. Dec 2 04:53:33 localhost podman[289699]: 2025-12-02 09:53:33.075560286 +0000 UTC m=+0.077965328 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph 
ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=1763362218, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:53:33 localhost nova_compute[281854]: 2025-12-02 09:53:33.172 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:33 localhost podman[289699]: 2025-12-02 09:53:33.236083251 +0000 UTC m=+0.238488303 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.) 
Dec 2 04:53:34 localhost openstack_network_exporter[242845]: ERROR 09:53:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:53:34 localhost openstack_network_exporter[242845]: ERROR 09:53:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:53:34 localhost openstack_network_exporter[242845]: ERROR 09:53:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:53:34 localhost openstack_network_exporter[242845]: ERROR 09:53:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:53:34 localhost openstack_network_exporter[242845]: Dec 2 04:53:34 localhost openstack_network_exporter[242845]: ERROR 09:53:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:53:34 localhost openstack_network_exporter[242845]: Dec 2 04:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:53:34 localhost systemd[1]: tmp-crun.GRMT26.mount: Deactivated successfully. 
Dec 2 04:53:34 localhost podman[289819]: 2025-12-02 09:53:34.468279817 +0000 UTC m=+0.105483864 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:53:34 localhost podman[289819]: 2025-12-02 09:53:34.475821159 +0000 UTC m=+0.113025206 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:53:34 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:53:36 localhost podman[240799]: time="2025-12-02T09:53:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:53:36 localhost podman[240799]: @ - - [02/Dec/2025:09:53:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 04:53:36 localhost podman[240799]: @ - - [02/Dec/2025:09:53:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18689 "" "Go-http-client/1.1" Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541913@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541909 calling monitor election Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541913 calling monitor election Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541912 calling monitor election Dec 2 04:53:36 localhost ceph-mon[289473]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4,5) Dec 2 04:53:36 localhost ceph-mon[289473]: overall HEALTH_OK Dec 2 04:53:36 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:36 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost nova_compute[281854]: 2025-12-02 09:53:37.011 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:37 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:37 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:38 localhost nova_compute[281854]: 2025-12-02 09:53:38.220 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:38 localhost ceph-mon[289473]: Updating np0005541909.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:38 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[289473]: 
Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[289473]: Updating np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 
localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:39 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.843556) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219843696, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 10010, "num_deletes": 255, "total_data_size": 10633068, "memory_usage": 10933432, "flush_reason": "Manual Compaction"} Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219905488, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 9090960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 10015, "table_properties": {"data_size": 9037841, "index_size": 28245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 250141, "raw_average_key_size": 26, "raw_value_size": 8876134, "raw_average_value_size": 934, "num_data_blocks": 1084, "num_entries": 9501, "num_filter_entries": 9501, 
"num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669202, "oldest_key_time": 1764669202, "file_creation_time": 1764669219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 62028 microseconds, and 21923 cpu microseconds. 
Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.905581) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 9090960 bytes OK Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.905643) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.907785) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.907812) EVENT_LOG_v1 {"time_micros": 1764669219907804, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.907835) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10563943, prev total WAL file size 10563943, number of live WAL files 2. Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.909858) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. 
'7061786F73003130323932' seq:0, type:0; will stop at (end) Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8877KB) 8(1887B)] Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219909998, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 9092847, "oldest_snapshot_seqno": -1} Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9249 keys, 9087051 bytes, temperature: kUnknown Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219979600, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 9087051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9034600, "index_size": 28222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 245334, "raw_average_key_size": 26, "raw_value_size": 8876192, "raw_average_value_size": 959, "num_data_blocks": 1083, "num_entries": 9249, "num_filter_entries": 9249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669201, "oldest_key_time": 0, "file_creation_time": 1764669219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.979961) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 9087051 bytes Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.981680) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.7 rd, 130.7 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.7, 0.0 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9506, records dropped: 257 output_compression: NoCompression Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.981703) EVENT_LOG_v1 {"time_micros": 1764669219981692, "job": 4, "event": "compaction_finished", "compaction_time_micros": 69550, "compaction_time_cpu_micros": 30272, "output_level": 6, "num_output_files": 1, "total_output_size": 9087051, "num_input_records": 9506, "num_output_records": 9249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000014.sst immediately, 
rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219982884, "job": 4, "event": "table_file_deletion", "file_number": 14} Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219982931, "job": 4, "event": "table_file_deletion", "file_number": 8} Dec 2 04:53:39 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:53:39.909679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:53:40 localhost ceph-mon[289473]: Reconfiguring mon.np0005541909 (monmap changed)... Dec 2 04:53:40 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541909 on np0005541909.localdomain Dec 2 04:53:40 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:40 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:40 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541909.kfesnk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:53:41 localhost podman[290248]: 2025-12-02 09:53:41.450022965 +0000 UTC m=+0.081875911 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd) Dec 2 04:53:41 localhost podman[290248]: 2025-12-02 09:53:41.487252223 +0000 UTC m=+0.119105169 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 04:53:41 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:53:41 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541909.kfesnk (monmap changed)... 
Dec 2 04:53:41 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541909.kfesnk on np0005541909.localdomain Dec 2 04:53:41 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:41 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:41 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541909.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:53:41 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:42 localhost nova_compute[281854]: 2025-12-02 09:53:42.065 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:42 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "status", "format": "json"} v 0) Dec 2 04:53:42 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='client.? 172.18.0.103:0/3005476938' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Dec 2 04:53:42 localhost ceph-mon[289473]: Reconfiguring crash.np0005541909 (monmap changed)... 
Dec 2 04:53:42 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541909 on np0005541909.localdomain Dec 2 04:53:42 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:42 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:42 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:53:43 localhost nova_compute[281854]: 2025-12-02 09:53:43.261 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:53:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:53:43 localhost podman[290266]: 2025-12-02 09:53:43.457554742 +0000 UTC m=+0.089566338 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller) Dec 2 04:53:43 localhost podman[290266]: 2025-12-02 09:53:43.50308805 +0000 UTC m=+0.135099666 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:53:43 localhost systemd[1]: tmp-crun.qJ6DeV.mount: Deactivated successfully. Dec 2 04:53:43 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:53:43 localhost podman[290265]: 2025-12-02 09:53:43.51092567 +0000 UTC m=+0.146075290 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:53:43 localhost podman[290265]: 2025-12-02 09:53:43.594207569 +0000 UTC m=+0.229357159 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:53:43 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:53:43 localhost ceph-mon[289473]: Reconfiguring crash.np0005541910 (monmap changed)... Dec 2 04:53:43 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain Dec 2 04:53:43 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:43 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:43 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:44 localhost nova_compute[281854]: 2025-12-02 09:53:44.369 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:44 localhost nova_compute[281854]: 2025-12-02 09:53:44.388 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:44 localhost nova_compute[281854]: 2025-12-02 09:53:44.388 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:44 localhost nova_compute[281854]: 2025-12-02 09:53:44.388 281858 DEBUG 
nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:53:44 localhost nova_compute[281854]: 2025-12-02 09:53:44.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:44 localhost nova_compute[281854]: 2025-12-02 09:53:44.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:53:44 localhost nova_compute[281854]: 2025-12-02 09:53:44.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:53:44 localhost ceph-mon[289473]: Reconfiguring mon.np0005541910 (monmap changed)... 
Dec 2 04:53:44 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain Dec 2 04:53:44 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:44 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:44 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:44 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:44 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' Dec 2 04:53:44 localhost ceph-mon[289473]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.239 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.239 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.240 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.240 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon).osd e85 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon).osd e85 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon).osd e86 e86: 6 total, 6 up, 6 in Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541909"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541909"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541910"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541911"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 
handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon).mds e16 all = 0 Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon).mds e16 all = 0 Dec 2 04:53:45 localhost 
ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon).mds e16 all = 0 Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541910.kzipdo", "id": "np0005541910.kzipdo"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541910.kzipdo", "id": "np0005541910.kzipdo"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} v 0) Dec 2 04:53:45 localhost 
ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 0} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 1} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 2} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd 
metadata", "id": 3} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 4} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 5} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mds metadata"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mds metadata"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon).mds e16 all = 1 Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "osd metadata"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mon metadata"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata"} : dispatch Dec 2 04:53:45 localhost systemd[1]: session-25.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-27.scope: Deactivated successfully. 
Dec 2 04:53:45 localhost systemd[1]: session-27.scope: Consumed 3min 25.944s CPU time. Dec 2 04:53:45 localhost systemd[1]: session-26.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-24.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-17.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-21.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-20.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-15.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-23.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-19.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-18.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd[1]: session-22.scope: Deactivated successfully. Dec 2 04:53:45 localhost systemd-logind[757]: Session 17 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 18 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 25 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 27 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 26 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 24 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 22 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 23 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 20 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 19 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Session 21 logged out. Waiting for processes to exit. 
Dec 2 04:53:45 localhost systemd-logind[757]: Session 15 logged out. Waiting for processes to exit. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 25. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 27. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 26. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 24. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 17. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 21. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 20. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 15. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 23. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 19. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 18. Dec 2 04:53:45 localhost systemd-logind[757]: Removed session 22. Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/mirror_snapshot_schedule"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/mirror_snapshot_schedule"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/trash_purge_schedule"} v 0) Dec 2 04:53:45 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/trash_purge_schedule"} : dispatch Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.590 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - 
- -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.605 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.605 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:53:45 localhost 
sshd[290313]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:53:45 localhost systemd-logind[757]: New session 65 of user ceph-admin. Dec 2 04:53:45 localhost systemd[1]: Started Session 65 of User ceph-admin. Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:45 localhost nova_compute[281854]: 2025-12-02 09:53:45.829 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:45 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... Dec 2 04:53:45 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:53:45 localhost ceph-mon[289473]: Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:53:45 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:53:45 localhost ceph-mon[289473]: from='client.? 172.18.0.103:0/1327578721' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: Activating manager daemon np0005541911.adcgiw Dec 2 04:53:45 localhost ceph-mon[289473]: from='client.? 
172.18.0.103:0/1327578721' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 2 04:53:45 localhost ceph-mon[289473]: Manager daemon np0005541911.adcgiw is now available Dec 2 04:53:45 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/mirror_snapshot_schedule"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/mirror_snapshot_schedule"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/trash_purge_schedule"} : dispatch Dec 2 04:53:45 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/trash_purge_schedule"} : dispatch Dec 2 04:53:46 localhost systemd[1]: tmp-crun.iYHfDB.mount: Deactivated successfully. 
Dec 2 04:53:46 localhost podman[290424]: 2025-12-02 09:53:46.584800283 +0000 UTC m=+0.069811169 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Dec 2 04:53:46 localhost podman[290424]: 2025-12-02 09:53:46.662512023 +0000 UTC m=+0.147522939 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.41.4, ceph=True, release=1763362218, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:53:46 localhost nova_compute[281854]: 2025-12-02 09:53:46.825 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:46 localhost nova_compute[281854]: 2025-12-02 09:53:46.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:46 localhost nova_compute[281854]: 2025-12-02 09:53:46.849 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:53:46 localhost nova_compute[281854]: 2025-12-02 09:53:46.850 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" 
acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:53:46 localhost nova_compute[281854]: 2025-12-02 09:53:46.851 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:53:46 localhost nova_compute[281854]: 2025-12-02 09:53:46.851 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:53:46 localhost nova_compute[281854]: 2025-12-02 09:53:46.851 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon).osd e86 _set_new_cache_sizes cache_size:1019817753 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.105 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541909.localdomain.devices.0}] v 0) Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.326 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:53:47 localhost ceph-mon[289473]: [02/Dec/2025:09:53:46] ENGINE Bus STARTING Dec 2 04:53:47 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:47 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0) Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.382 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.383 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have 
a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541909.localdomain}] v 0) Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0) Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.556 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.557 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11837MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.557 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.558 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:53:47 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.689 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.690 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.690 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:53:47 localhost nova_compute[281854]: 2025-12-02 09:53:47.760 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:53:48 localhost nova_compute[281854]: 2025-12-02 09:53:48.266 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:53:48 localhost nova_compute[281854]: 2025-12-02 09:53:48.308 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:53:48 localhost nova_compute[281854]: 2025-12-02 
09:53:48.310 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:48 localhost nova_compute[281854]: 2025-12-02 09:53:48.326 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:53:48 localhost nova_compute[281854]: 2025-12-02 09:53:48.328 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:53:48 localhost nova_compute[281854]: 2025-12-02 09:53:48.328 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:53:48 localhost ceph-mon[289473]: [02/Dec/2025:09:53:46] ENGINE Serving on https://172.18.0.105:7150 Dec 2 04:53:48 localhost ceph-mon[289473]: [02/Dec/2025:09:53:46] ENGINE Client ('172.18.0.105', 60410) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 04:53:48 localhost 
ceph-mon[289473]: [02/Dec/2025:09:53:47] ENGINE Serving on http://172.18.0.105:8765 Dec 2 04:53:48 localhost ceph-mon[289473]: [02/Dec/2025:09:53:47] ENGINE Bus STARTED Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:48 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:53:48 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:53:48 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} v 0) Dec 2 04:53:48 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch Dec 2 04:53:48 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0) Dec 2 04:53:48 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0) Dec 2 04:53:48 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541909.localdomain.devices.0}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541909.localdomain}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 
04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005541909", "name": "osd_memory_target"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541909", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) 
Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) 
Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:49 localhost nova_compute[281854]: 2025-12-02 09:53:49.329 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' 
entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541909", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541909", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.3", 
"name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:53:49 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:53:49 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3338051788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:53:50 localhost ceph-mon[289473]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 04:53:50 localhost ceph-mon[289473]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 04:53:50 localhost ceph-mon[289473]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 04:53:50 localhost ceph-mon[289473]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:53:50 localhost ceph-mon[289473]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:53:50 localhost ceph-mon[289473]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:53:50 localhost ceph-mon[289473]: Updating np0005541909.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:53:50 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541909.kfesnk", "id": "np0005541909.kfesnk"} v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 
172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541909.kfesnk", "id": "np0005541909.kfesnk"} : dispatch Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[289473]: 
Updating np0005541909.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:53:51 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:51 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541909.localdomain.devices.0}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541909.localdomain}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) 
e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 04:53:51 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 04:53:52 localhost ceph-mon[289473]: mon.np0005541913@4(peon).osd e86 _set_new_cache_sizes cache_size:1020050624 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:53:52 localhost nova_compute[281854]: 2025-12-02 09:53:52.153 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:52 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Dec 2 04:53:52 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:52 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Dec 2 04:53:52 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Dec 2 04:53:52 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:53:52 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 
04:53:52 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:52 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:52 localhost ceph-mon[289473]: Updating np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:52 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:52 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:52 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:52 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 
04:53:52 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:53:53 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:53:53 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:53:53 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:53:53 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:53 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 2 04:53:53 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch Dec 2 04:53:53 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:53:53 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:53:53 localhost nova_compute[281854]: 2025-12-02 09:53:53.310 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:53:53 localhost 
ceph-mon[289473]: Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:53:53 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:53:53 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:53 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:53 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:53:53 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:53:53 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:53:54 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:53:54 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:53:54 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:53:54 localhost ceph-mon[289473]: 
log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:53:54 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)... Dec 2 04:53:54 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain Dec 2 04:53:54 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:54 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:54 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:53:54 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:53:54 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0) Dec 2 04:53:54 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0) Dec 2 04:53:54 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:53:54 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} 
: dispatch Dec 2 04:53:54 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:53:54 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:53:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:53:55 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 04:53:55 localhost podman[291372]: 2025-12-02 09:53:55.419164263 +0000 UTC m=+0.061659881 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:53:55 localhost podman[291372]: 2025-12-02 09:53:55.434169794 +0000 UTC m=+0.076665392 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 04:53:55 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:53:55 localhost ceph-mon[289473]: Reconfiguring crash.np0005541911 (monmap changed)... Dec 2 04:53:55 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain Dec 2 04:53:55 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:55 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' Dec 2 04:53:55 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:53:55 localhost ceph-mon[289473]: Reconfiguring crash.np0005541912 (monmap changed)... 
Dec 2 04:53:55 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:53:55 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:53:55 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:55 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:53:55 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:53:55 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 2 04:53:55 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:53:55 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:53:55 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:53:56 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:53:56 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:53:56 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:56 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:56 localhost ceph-mon[289473]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:53:56 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:53:56 localhost ceph-mon[289473]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:53:56 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 2 04:53:56 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:53:56 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:53:56 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:53:57 localhost ceph-mon[289473]: mon.np0005541913@4(peon).osd e86 _set_new_cache_sizes cache_size:1020054660 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:53:57 localhost nova_compute[281854]: 2025-12-02 09:53:57.155 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:53:57 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:53:58 localhost nova_compute[281854]: 2025-12-02 09:53:58.338 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:53:58 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:58 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:58 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:53:58 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:53:58 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:53:58 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:53:58 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:53:58 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:53:59 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:53:59 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:53:59 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:53:59 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:53:59 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:53:59 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:53:59 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:53:59 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:53:59 localhost ceph-mon[289473]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:53:59 localhost ceph-mon[289473]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:53:59 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:59 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:59 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:53:59 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:53:59 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:53:59 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:59 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:53:59 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:53:59 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:54:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:54:00 localhost podman[291392]: 2025-12-02 09:54:00.418931049 +0000 UTC m=+0.061839466 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 2 04:54:00 localhost podman[291392]: 2025-12-02 09:54:00.452034315 +0000 UTC m=+0.094942642 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:54:00 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:54:00 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:54:00 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:54:00 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 2 04:54:00 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:54:00 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 2 04:54:00 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 2 04:54:00 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:54:00 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:54:00 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:54:00 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:54:00 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:54:00 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:00 localhost ceph-mon[289473]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:00 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:54:01 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:54:01 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 2 04:54:01 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "quorum_status"} : dispatch
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e6 handle_command mon_command({"prefix": "mon rm", "name": "np0005541909"} v 0)
Dec 2 04:54:01 localhost ceph-mon[289473]: log_channel(audit) log [INF] : from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon rm", "name": "np0005541909"} : dispatch
Dec 2 04:54:01 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x564503730f20 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Dec 2 04:54:01 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Dec 2 04:54:01 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@4(peon) e7 my rank is now 3 (was 4)
Dec 2 04:54:01 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Dec 2 04:54:01 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Dec 2 04:54:01 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x564503731600 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 2 04:54:01 localhost ceph-mon[289473]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election
Dec 2 04:54:01 localhost ceph-mon[289473]: paxos.3).electionLogic(26) init, last seen epoch 26
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:01 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:02 localhost podman[291463]:
Dec 2 04:54:02 localhost podman[291463]: 2025-12-02 09:54:02.018814296 +0000 UTC m=+0.063700436 container create 39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_shamir, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 04:54:02 localhost systemd[1]: Started libpod-conmon-39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1.scope.
Dec 2 04:54:02 localhost systemd[1]: Started libcrun container.
Dec 2 04:54:02 localhost podman[291463]: 2025-12-02 09:54:02.0892096 +0000 UTC m=+0.134095780 container init 39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_shamir, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main)
Dec 2 04:54:02 localhost podman[291463]: 2025-12-02 09:54:02.000422203 +0000 UTC m=+0.045308393 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:54:02 localhost podman[291463]: 2025-12-02 09:54:02.102876206 +0000 UTC m=+0.147762386 container start 39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_shamir, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 2 04:54:02 localhost podman[291463]: 2025-12-02 09:54:02.103356738 +0000 UTC m=+0.148242948 container attach 39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_shamir, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-type=git, maintainer=Guillaume Abrioux )
Dec 2 04:54:02 localhost systemd[1]: libpod-39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1.scope: Deactivated successfully.
Dec 2 04:54:02 localhost wonderful_shamir[291478]: 167 167
Dec 2 04:54:02 localhost podman[291463]: 2025-12-02 09:54:02.110939941 +0000 UTC m=+0.155826111 container died 39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_shamir, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 2 04:54:02 localhost nova_compute[281854]: 2025-12-02 09:54:02.158 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:54:02 localhost podman[291483]: 2025-12-02 09:54:02.218450118 +0000 UTC m=+0.092729162 container remove 39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_shamir, name=rhceph, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True)
Dec 2 04:54:02 localhost systemd[1]: libpod-conmon-39ee22f6ba3bffc7b09c003968c022dcd3c328395e2c35d25692411ecd6f64a1.scope: Deactivated successfully.
Dec 2 04:54:03 localhost systemd[1]: var-lib-containers-storage-overlay-3e7778c046b51d74afe22d0b31a3bcdc570c202d83282664786bbf2e80d7bb91-merged.mount: Deactivated successfully.
Dec 2 04:54:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:54:03.038 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:54:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:54:03.039 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:54:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:54:03.041 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:54:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 04:54:03 localhost podman[291500]: 2025-12-02 09:54:03.105673393 +0000 UTC m=+0.055830945 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 2 04:54:03 localhost podman[291500]: 2025-12-02 09:54:03.12314575 +0000 UTC m=+0.073303382 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, version=9.6, config_id=edpm)
Dec 2 04:54:03 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 04:54:03 localhost nova_compute[281854]: 2025-12-02 09:54:03.343 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:54:04 localhost openstack_network_exporter[242845]: ERROR 09:54:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:54:04 localhost openstack_network_exporter[242845]: ERROR 09:54:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:54:04 localhost openstack_network_exporter[242845]: ERROR 09:54:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:54:04 localhost openstack_network_exporter[242845]: ERROR 09:54:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:54:04 localhost openstack_network_exporter[242845]:
Dec 2 04:54:04 localhost openstack_network_exporter[242845]: ERROR 09:54:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:54:04 localhost openstack_network_exporter[242845]:
Dec 2 04:54:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 04:54:05 localhost podman[291520]: 2025-12-02 09:54:05.446327324 +0000 UTC m=+0.078280546 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:54:05 localhost podman[291520]: 2025-12-02 09:54:05.483076927 +0000 UTC m=+0.115030139 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:54:05 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 04:54:06 localhost podman[240799]: time="2025-12-02T09:54:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:54:06 localhost podman[240799]: @ - - [02/Dec/2025:09:54:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 04:54:06 localhost podman[240799]: @ - - [02/Dec/2025:09:54:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18702 "" "Go-http-client/1.1" Dec 2 04:54:06 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:06 localhost ceph-mon[289473]: Reconfiguring mon.np0005541912 (monmap changed)... 
Dec 2 04:54:06 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:54:06 localhost ceph-mon[289473]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:54:06 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:54:06 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:54:06 localhost ceph-mon[289473]: Remove daemons mon.np0005541909
Dec 2 04:54:06 localhost ceph-mon[289473]: Safe to remove mon.np0005541909: new quorum should be ['np0005541911', 'np0005541910', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541910', 'np0005541914', 'np0005541913', 'np0005541912'])
Dec 2 04:54:06 localhost ceph-mon[289473]: Removing monitor np0005541909 from monmap...
Dec 2 04:54:06 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon rm", "name": "np0005541909"} : dispatch
Dec 2 04:54:06 localhost ceph-mon[289473]: Removing daemon mon.np0005541909 from np0005541909.localdomain -- ports []
Dec 2 04:54:06 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election
Dec 2 04:54:06 localhost ceph-mon[289473]: mon.np0005541912 calling monitor election
Dec 2 04:54:06 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election
Dec 2 04:54:06 localhost ceph-mon[289473]: mon.np0005541913 calling monitor election
Dec 2 04:54:06 localhost ceph-mon[289473]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541913,np0005541912 in quorum (ranks 0,1,3,4)
Dec 2 04:54:06 localhost ceph-mon[289473]: Health check failed: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912 (MON_DOWN)
Dec 2 04:54:06 localhost ceph-mon[289473]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912
Dec 2 04:54:06 localhost ceph-mon[289473]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912
Dec 2 04:54:06 localhost ceph-mon[289473]: mon.np0005541914 (rank 2) addr [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] is down (out of quorum)
Dec 2 04:54:06 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:07 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:54:07 localhost nova_compute[281854]: 2025-12-02 09:54:07.207 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:54:07 localhost podman[291596]:
Dec 2 04:54:07 localhost podman[291596]: 2025-12-02 09:54:07.35296415 +0000 UTC
m=+0.038225655 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:08 localhost podman[291596]: 2025-12-02 09:54:08.085156405 +0000 UTC m=+0.770417830 container create 32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_jennings, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.41.4, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, version=7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True) Dec 2 04:54:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:08 localhost ceph-mon[289473]: Reconfiguring osd.0 (monmap changed)... 
Dec 2 04:54:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:54:08 localhost ceph-mon[289473]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:54:08 localhost systemd[1]: Started libpod-conmon-32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340.scope. Dec 2 04:54:08 localhost systemd[1]: Started libcrun container. Dec 2 04:54:08 localhost podman[291596]: 2025-12-02 09:54:08.170097639 +0000 UTC m=+0.855359064 container init 32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_jennings, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_BRANCH=main, version=7, distribution-scope=public, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Dec 2 04:54:08 localhost podman[291596]: 2025-12-02 09:54:08.182574702 +0000 UTC m=+0.867836127 container start 32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_jennings, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, name=rhceph, ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:54:08 localhost podman[291596]: 2025-12-02 09:54:08.182952852 +0000 UTC m=+0.868214277 container attach 32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_jennings, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, release=1763362218) Dec 2 04:54:08 localhost silly_jennings[291612]: 167 167 Dec 2 04:54:08 localhost podman[291596]: 2025-12-02 09:54:08.187884184 +0000 UTC m=+0.873145659 container died 32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_jennings, io.openshift.tags=rhceph ceph, distribution-scope=public, release=1763362218, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:54:08 localhost systemd[1]: 
libpod-32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340.scope: Deactivated successfully. Dec 2 04:54:08 localhost podman[291617]: 2025-12-02 09:54:08.301868595 +0000 UTC m=+0.099681649 container remove 32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_jennings, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True) Dec 2 04:54:08 localhost systemd[1]: libpod-conmon-32b8df09b8068be6d015ad8d703309ff874c3c369a6c4084cf6ec5bf8e438340.scope: Deactivated successfully. 
Dec 2 04:54:08 localhost nova_compute[281854]: 2025-12-02 09:54:08.374 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:08 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:08 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:08 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:09 localhost systemd[1]: var-lib-containers-storage-overlay-ae2a51311215695b268b5158db2a55777facf085935d077086282611735a4371-merged.mount: Deactivated successfully. Dec 2 04:54:09 localhost podman[291695]: Dec 2 04:54:09 localhost podman[291695]: 2025-12-02 09:54:09.167998394 +0000 UTC m=+0.070196789 container create 1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_curie, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, name=rhceph, 
url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.) Dec 2 04:54:09 localhost systemd[1]: Started libpod-conmon-1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d.scope. Dec 2 04:54:09 localhost systemd[1]: Started libcrun container. Dec 2 04:54:09 localhost podman[291695]: 2025-12-02 09:54:09.226947692 +0000 UTC m=+0.129146087 container init 1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_curie, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.buildah.version=1.41.4, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, com.redhat.component=rhceph-container) Dec 2 04:54:09 localhost compassionate_curie[291712]: 167 167 Dec 2 04:54:09 localhost systemd[1]: 
libpod-1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d.scope: Deactivated successfully. Dec 2 04:54:09 localhost podman[291695]: 2025-12-02 09:54:09.241711347 +0000 UTC m=+0.143909762 container start 1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_curie, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , RELEASE=main) Dec 2 04:54:09 localhost podman[291695]: 2025-12-02 09:54:09.242679132 +0000 UTC m=+0.144877527 container attach 1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_curie, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, release=1763362218, vendor=Red Hat, Inc., 
io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True) Dec 2 04:54:09 localhost podman[291695]: 2025-12-02 09:54:09.144703181 +0000 UTC m=+0.046901596 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:09 localhost podman[291695]: 2025-12-02 09:54:09.245640482 +0000 UTC m=+0.147838947 container died 1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_curie, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
release=1763362218, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:54:09 localhost podman[291717]: 2025-12-02 09:54:09.334372326 +0000 UTC m=+0.084518272 container remove 1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_curie, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , name=rhceph) Dec 2 04:54:09 localhost systemd[1]: libpod-conmon-1794b9e1e66269ba361d09df5796d617cf6d5ca4ca7bbb4cb1bb71c7ce90724d.scope: Deactivated successfully. 
Dec 2 04:54:09 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election
Dec 2 04:54:09 localhost ceph-mon[289473]: Removed label mon from host np0005541909.localdomain
Dec 2 04:54:09 localhost ceph-mon[289473]: Reconfiguring osd.3 (monmap changed)...
Dec 2 04:54:09 localhost ceph-mon[289473]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 2 04:54:09 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election
Dec 2 04:54:09 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election
Dec 2 04:54:09 localhost ceph-mon[289473]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4)
Dec 2 04:54:09 localhost ceph-mon[289473]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912)
Dec 2 04:54:09 localhost ceph-mon[289473]: Cluster is now healthy
Dec 2 04:54:09 localhost ceph-mon[289473]: overall HEALTH_OK
Dec 2 04:54:09 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:09 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:10 localhost podman[291793]:
Dec 2 04:54:10 localhost systemd[1]: var-lib-containers-storage-overlay-85ab00339e723e85db99cab163831b14e6f4325c881d2bdb3aff735755c76dc1-merged.mount: Deactivated successfully.
Dec 2 04:54:10 localhost podman[291793]: 2025-12-02 09:54:10.103256484 +0000 UTC m=+0.078809270 container create f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_meitner, ceph=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, vcs-type=git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container) Dec 2 04:54:10 localhost systemd[1]: Started libpod-conmon-f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632.scope. Dec 2 04:54:10 localhost systemd[1]: Started libcrun container. 
Dec 2 04:54:10 localhost podman[291793]: 2025-12-02 09:54:10.069422309 +0000 UTC m=+0.044975085 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:10 localhost podman[291793]: 2025-12-02 09:54:10.179521765 +0000 UTC m=+0.155074521 container init f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_meitner, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, RELEASE=main, vcs-type=git, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=1763362218, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7) Dec 2 04:54:10 localhost busy_meitner[291808]: 167 167 Dec 2 04:54:10 localhost podman[291793]: 2025-12-02 09:54:10.190088098 +0000 UTC m=+0.165640874 container start f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_meitner, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, 
GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=) Dec 2 04:54:10 localhost podman[291793]: 2025-12-02 09:54:10.193089408 +0000 UTC m=+0.168642184 container attach f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_meitner, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux ) Dec 2 04:54:10 localhost systemd[1]: libpod-f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632.scope: Deactivated successfully. Dec 2 04:54:10 localhost podman[291793]: 2025-12-02 09:54:10.196274554 +0000 UTC m=+0.171827360 container died f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_meitner, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4) Dec 2 04:54:10 localhost podman[291813]: 2025-12-02 09:54:10.297105102 +0000 UTC m=+0.092628950 container remove f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_meitner, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , release=1763362218, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container) Dec 2 04:54:10 localhost systemd[1]: libpod-conmon-f49d98988209681cea4bdd37bb15c8498fc901847d26c824f12751965d18a632.scope: Deactivated successfully. Dec 2 04:54:10 localhost ceph-mon[289473]: Removed label mgr from host np0005541909.localdomain Dec 2 04:54:10 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:10 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:54:10 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:54:10 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:54:10 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:10 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:10 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:10 localhost podman[291881]: Dec 2 04:54:10 localhost podman[291881]: 2025-12-02 09:54:10.97970071 +0000 UTC m=+0.052130946 container create b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_bhabha, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, version=7, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:54:11 localhost systemd[1]: Started libpod-conmon-b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6.scope. Dec 2 04:54:11 localhost systemd[1]: Started libcrun container. Dec 2 04:54:11 localhost podman[291881]: 2025-12-02 09:54:11.020279736 +0000 UTC m=+0.092709972 container init b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_bhabha, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4) Dec 2 04:54:11 localhost interesting_bhabha[291896]: 167 167 Dec 2 04:54:11 localhost podman[291881]: 2025-12-02 09:54:11.027997533 +0000 UTC 
m=+0.100427769 container start b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_bhabha, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=rhceph-container) Dec 2 04:54:11 localhost podman[291881]: 2025-12-02 09:54:11.028212878 +0000 UTC m=+0.100643114 container attach b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_bhabha, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, release=1763362218, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., name=rhceph, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:54:11 localhost systemd[1]: libpod-b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6.scope: Deactivated successfully. Dec 2 04:54:11 localhost podman[291881]: 2025-12-02 09:54:11.030297534 +0000 UTC m=+0.102727750 container died b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_bhabha, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, 
vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:54:11 localhost podman[291881]: 2025-12-02 09:54:10.956983071 +0000 UTC m=+0.029413347 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:11 localhost systemd[1]: var-lib-containers-storage-overlay-6489660dc58f8ccd2802868b9ab7e4f2cd3205b7827f6e9dfd3c157917e2ee6e-merged.mount: Deactivated successfully. Dec 2 04:54:11 localhost systemd[1]: var-lib-containers-storage-overlay-3fe33b4ffe6dcbd3e9c97c3cc95d8163b0e9aaed797836668f4857574d4acff2-merged.mount: Deactivated successfully. Dec 2 04:54:11 localhost podman[291901]: 2025-12-02 09:54:11.174441272 +0000 UTC m=+0.133754041 container remove b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_bhabha, ceph=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, release=1763362218) Dec 2 04:54:11 localhost systemd[1]: 
libpod-conmon-b659bba46845dbb9bd3b97097830fccaf43c81fbbca8dea2cc9fb4be15ea5de6.scope: Deactivated successfully. Dec 2 04:54:11 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:54:11 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:54:11 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:11 localhost ceph-mon[289473]: Removed label _admin from host np0005541909.localdomain Dec 2 04:54:11 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:11 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:11 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:54:11 localhost podman[291971]: 2025-12-02 09:54:11.861465008 +0000 UTC m=+0.070625111 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Dec 2 04:54:11 localhost podman[291979]: Dec 2 04:54:11 localhost podman[291971]: 2025-12-02 09:54:11.902126796 +0000 UTC m=+0.111286869 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125) Dec 2 04:54:11 localhost podman[291979]: 2025-12-02 09:54:11.909781831 +0000 UTC m=+0.108081324 container create 7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_chebyshev, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, release=1763362218) Dec 2 04:54:11 localhost podman[291979]: 2025-12-02 09:54:11.826592795 +0000 UTC m=+0.024892268 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:11 localhost systemd[1]: Started libpod-conmon-7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66.scope. Dec 2 04:54:11 localhost systemd[1]: Started libcrun container. Dec 2 04:54:12 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:12 localhost nova_compute[281854]: 2025-12-02 09:54:12.210 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:12 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:54:12 localhost podman[291979]: 2025-12-02 09:54:12.236578076 +0000 UTC m=+0.434877569 container init 7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_chebyshev, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, architecture=x86_64, version=7, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:54:12 localhost podman[291979]: 2025-12-02 09:54:12.244761935 +0000 UTC m=+0.443061418 container start 7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_chebyshev, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main) Dec 2 04:54:12 localhost podman[291979]: 2025-12-02 09:54:12.245043903 +0000 UTC m=+0.443343436 container attach 7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_chebyshev, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., 
io.openshift.tags=rhceph ceph) Dec 2 04:54:12 localhost practical_chebyshev[292005]: 167 167 Dec 2 04:54:12 localhost systemd[1]: libpod-7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66.scope: Deactivated successfully. Dec 2 04:54:12 localhost podman[291979]: 2025-12-02 09:54:12.248181567 +0000 UTC m=+0.446481050 container died 7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_chebyshev, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4) Dec 2 04:54:12 localhost systemd[1]: var-lib-containers-storage-overlay-da8646c33ff3f47a07a3651a2f7b8a5cc736c7c37b7ef76d1db0b144f92da39b-merged.mount: Deactivated successfully. 
Dec 2 04:54:12 localhost podman[292010]: 2025-12-02 09:54:12.346166219 +0000 UTC m=+0.093790770 container remove 7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_chebyshev, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, release=1763362218, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:54:12 localhost systemd[1]: libpod-conmon-7a03194a013b93c0b68ab81fe12de1d3ac89f930cf7e521ae0f021917b054c66.scope: Deactivated successfully. Dec 2 04:54:12 localhost ceph-mon[289473]: Reconfiguring mon.np0005541913 (monmap changed)... 
Dec 2 04:54:12 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain Dec 2 04:54:12 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:12 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:12 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:13 localhost nova_compute[281854]: 2025-12-02 09:54:13.377 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:13 localhost sshd[292027]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:54:13 localhost ceph-mon[289473]: Reconfiguring crash.np0005541914 (monmap changed)... Dec 2 04:54:13 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:54:13 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:13 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:13 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:54:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:54:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:54:14 localhost podman[292028]: 2025-12-02 09:54:14.457104904 +0000 UTC m=+0.085377507 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:54:14 localhost podman[292028]: 2025-12-02 09:54:14.473140583 +0000 UTC m=+0.101413146 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:54:14 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:54:14 localhost podman[292029]: 2025-12-02 09:54:14.552826425 +0000 UTC m=+0.181006995 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:54:14 localhost podman[292029]: 2025-12-02 09:54:14.656918191 +0000 UTC m=+0.285098821 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 04:54:14 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:54:14 localhost ceph-mon[289473]: Reconfiguring osd.1 (monmap changed)... 
Dec 2 04:54:14 localhost ceph-mon[289473]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:54:14 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:14 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:14 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:54:15 localhost ceph-mon[289473]: Reconfiguring osd.4 (monmap changed)... Dec 2 04:54:15 localhost ceph-mon[289473]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:54:15 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:15 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:15 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.103 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 
'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.104 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.134 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.134 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8d541d84-cbb6-4005-b733-d36ade130ce3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.104368', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'dd15a046-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': 'fbee31c758f4ee472b69df937fb94df26f5c5d16220cd4aca3b3a6d0bc11f596'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.104368', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd15b2ca-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': 'f848ba4ec9efc6e4ac4c07e9fe702ab2261dccf1589cfa52b1121f10916b0656'}]}, 'timestamp': '2025-12-02 09:54:16.135353', '_unique_id': 'efb326e67b7246e79d7407980908e0c7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.136 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.138 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.138 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.138 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging 
[-] Could not send notification to notifications. Payload={'message_id': 'b0fe9216-5639-4367-be0f-8bf92edef5c6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.138200', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'dd163326-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': '5ea084b6f9c90767c4b953f15dddf77a7546bc2b2634678f4e51701440422047'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.138200', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd1644ec-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': '534dcacb174fd53e7c19e5845797794de5720e562b515222d2834d848976fda3'}]}, 'timestamp': '2025-12-02 09:54:16.139084', '_unique_id': '7dd55e0ea6ae406aabae1b1a4c397881'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 
04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.140 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.141 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.141 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.141 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b1495ef8-ca7a-47a6-be44-f78f68d65d92', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.141298', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'dd16abe4-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': 'f74fbea2a8b208c3ab8170046dc380aa32aae667c120b41d48a0f7b1df4914b4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.141298', 'resource_metadata': {'display_name': 'test', 
'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd16bd0a-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': '02efb35f7f0f51a57c348a7bd7737dcf398845240a8ce9529b6541a8630d38c1'}]}, 'timestamp': '2025-12-02 09:54:16.142153', '_unique_id': 'aaf7496edd6e4d77ada19e843715dac6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in 
connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get 
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.143 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.144 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.162 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 13350000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '729b53e8-29cc-488b-bba6-d50cf1ecbc90', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13350000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:54:16.144262', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'dd19d972-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.381069609, 'message_signature': '0f2fa06e9c30b2bfe4a4d16ec85e5aa4c9a757b44dc2daca39470ab4eea0eab1'}]}, 'timestamp': '2025-12-02 09:54:16.162558', '_unique_id': 'aacf3e2b931e4c1b8a32e3eee041ede1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.163 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.164 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:54:16.176 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.177 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '15f79ac5-d995-4cb1-8279-03310f8b3a5e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.165058', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 
'vda'}, 'message_id': 'dd1c116a-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.384150372, 'message_signature': '4325d9be482660e60d75eec8c6dd740e660f6fc7cae60486e85fc9a60e112192'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.165058', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd1c227c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.384150372, 'message_signature': 'd9892791bf75011fa1762fa8e98043854afb9a903d28732ef838ffcdb9e96e67'}]}, 'timestamp': '2025-12-02 09:54:16.177524', '_unique_id': '0a5d3d2ea8fe4e52863dc2574c4d4e37'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 446, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.178 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.179 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.183 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3390e43d-797a-4d5e-98ee-7bf9db9f9c71', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.180001', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd1d12ae-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': '076942a880ef583bc3c24661aa5c7341ab5e3e7a96eedf880beeb273433481da'}]}, 'timestamp': '2025-12-02 09:54:16.183872', '_unique_id': 'cd934a25a7924d29bf35adbb974313a8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.185 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.186 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.186 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.187 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '13626cc6-c422-45dc-970a-44334c0554fd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.186581', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'dd1d96d4-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.384150372, 'message_signature': '4165fc617518c096c2c4629bf32131ffab171b41d8e559f6aa7c4aa1db7ca629'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.186581', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd1da962-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.384150372, 'message_signature': 'add70e4876cdb53a9c83939bd4a70ea6585ebb7105e161b66e159a7aedf9d8ed'}]}, 'timestamp': '2025-12-02 09:54:16.187529', '_unique_id': '88152f438db84adb8df4cd8e37667336'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.189 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.190 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.190 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ee60d88-6422-4498-8ab1-1eb7cd76bd2c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.190058', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'dd1e1d5c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': 'ea8cb696c0d6abbea6ddba4bad5d7c937b955f117e944baef2befcf65ba4d3b5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.190058', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd1e2f18-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': '94814e785e04164fd96cad3d0b0d7df20c8d60f0b7570c699e092d89dacda2a3'}]}, 'timestamp': '2025-12-02 09:54:16.190957', '_unique_id': 'f0a2a7d886d844b78f63c8687dd59330'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 
04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.193 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.193 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': 'bb6ef30f-f748-48e4-9981-73d6e93b8847', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.193648', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd1eaa42-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': 'f303330f5755819618feb4fbdb87e5b6fa5eec6544ee1f5034f340a1a74d6cc1'}]}, 'timestamp': '2025-12-02 09:54:16.194155', '_unique_id': '8d8c9253369f4f5e9d21cf05a5edbb1d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 
04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.196 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.196 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d067ea4-5574-4223-9c17-51ac15e87650', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.196426', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd1f1838-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': '8c2c17917163790c91a86fcba300a3250fdd5361dc8558ba857f6e365e69ed99'}]}, 'timestamp': '2025-12-02 09:54:16.197091', '_unique_id': '5b525bf103aa41e787972e68e044fcb9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.199 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3f43e17d-659e-4ed3-864b-f36d20dd6134', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.199979', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd1fa56e-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': 'b280428baa3873b246e429f363dc0815c2640a90d1c743da694b0e3b1a7260dd'}]}, 'timestamp': '2025-12-02 09:54:16.200803', '_unique_id': '78e8a6fc6446490dbfa1c3ddc11c875a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.203 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9afa8b90-2c50-424d-9fce-536850cdf53a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.203357', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd20270a-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': 'ed2ce932eef102ae9de67cd8bdfecec1135e53089f09b8c3c12bf611581d649f'}]}, 'timestamp': '2025-12-02 09:54:16.203895', '_unique_id': 'bef7bdc7448d4e28951f392885f8ff7c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '237a5e49-8b50-4347-8b87-c7923e77814e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:54:16.206518', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'dd20a2b6-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.381069609, 'message_signature': '033be59db4cd5b4a7b91593f716b3cf26753d9e7b5c5203b35e1c73cbd04ee3d'}]}, 'timestamp': '2025-12-02 09:54:16.207069', '_unique_id': '994fae2f1b1c4317b0dca9ba00f7d5a2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.208 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.209 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:54:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a49b0195-c36d-4945-a11f-dcb00040a208', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.209686', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd211c96-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': 
'a11d699c67e7a4ce2a0a15376adc91481f1473ab61c672585e22558f04163a3b'}]}, 'timestamp': '2025-12-02 09:54:16.210188', '_unique_id': '9f6d1b351c894858ae2f0caf76a2e9b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR 
oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.211 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.212 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.212 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '158432e9-67ec-4f48-89f6-846d4edc11c4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.212384', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 
0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd218802-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': '71bf494cea669b465acf07eaa57f06468a017e7f0f4f8d352abfa9af972f6ae8'}]}, 'timestamp': '2025-12-02 09:54:16.212919', '_unique_id': '412ca99a79864c7f8f9207fd0287f12f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR 
oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.213 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.214 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.215 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0c6bbe2f-4d50-409e-9598-2167cc8d5b68', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.215077', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd21ee78-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': 'd3f969ec8045cae1e9ef82447dddb45031d504b38f4db05009cf7ccc4fa93c2d'}]}, 'timestamp': '2025-12-02 09:54:16.215538', '_unique_id': '5ea9633e18f748cc92c24ec2e8c224c7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.217 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.217 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.218 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '12ea32dc-3232-40d5-8d08-1766ffbd3f20', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.217694', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'dd2251a6-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.384150372, 'message_signature': '4faa72146f75585fa1629b0cc04b97826efbb2af27f36fdda859fca0bece26b8'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.217694', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd225e76-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.384150372, 'message_signature': '7a218dcf9565ba7ecc5d3642347d170aabc910c86e2f3f294365cc83d02d0326'}]}, 'timestamp': '2025-12-02 09:54:16.218343', '_unique_id': 'a1f784cdbd9b4845a9b6b8e685be3003'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3a1a2fb1-d1b1-41f6-b21f-ae3180107ff9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.220141', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'dd22b268-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': '09912e7c0583b8f93cca994853ab4c407891fa4dfa842e30e4e9caf07798af0b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.220141', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd22c1a4-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': '67da88bf3525d9833eaefa82939780f20ad0700feaf82a8d802a4b3e2943eeb4'}]}, 'timestamp': '2025-12-02 09:54:16.220900', '_unique_id': '244dec082223405091908ab8e6cd3f4d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02
09:54:16.221 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.221 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.222 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.222 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f08a63e1-90eb-443a-9313-06966e7ccfce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.222892', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd231dde-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': '9304f5cff1efaf722b471f51fca18bbdf75a64dc93ce27079f5453fa16572b05'}]}, 'timestamp': '2025-12-02 09:54:16.223268', '_unique_id': '229dfa199bdd4538b6add847140c20cf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.223 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.224 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.224 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1318373b-ef4c-4a9e-854a-dc247ad34dd2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:54:16.224861', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'dd236906-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': '8f501cc8a627da50cc7953b174e439b7d1b5bd00765b75c00cbd4972e20bc0ba'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:54:16.224861', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'dd237298-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.323441607, 'message_signature': '5b4b814705237155fb16973f471856db1691c33737dd4eb490f5f975e68c784e'}]}, 'timestamp': '2025-12-02 09:54:16.225367', '_unique_id': 'b520f126090340a5a84aac083b539e38'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.226 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.226 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4b1e88bd-68b9-4965-b8b4-ea90da1ccb1e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:54:16.226729', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'dd23b21c-cf64-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11418.399084592, 'message_signature': '7f849aa259040d83fe5c47dcbae7fb61ae47e189b41df2e70e1f68e2b8094ddd'}]}, 'timestamp': '2025-12-02 09:54:16.227011', '_unique_id': '080921cc203c4f01bf303f6cf6bb1506'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:54:16.227 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.227 12 ERROR oslo_messaging.notify.messaging Dec 2 04:54:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:54:16.228 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:54:16 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:54:16 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:54:16 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:16 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:16 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:17 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:17 localhost nova_compute[281854]: 2025-12-02 09:54:17.250 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:17 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541914.lljzmk 
(monmap changed)... Dec 2 04:54:17 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:54:17 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:17 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:17 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:18 localhost nova_compute[281854]: 2025-12-02 09:54:18.428 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:18 localhost ceph-mon[289473]: Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:54:18 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:54:18 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:18 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:20 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:20 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:20 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:54:20 localhost ceph-mon[289473]: Removing np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:20 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost 
ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:20 localhost ceph-mon[289473]: Removing np0005541909.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:54:20 localhost ceph-mon[289473]: Removing np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:54:20 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:20 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 
localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:21 localhost ceph-mon[289473]: Removing daemon mgr.np0005541909.kfesnk from np0005541909.localdomain -- ports [9283, 8765] Dec 2 04:54:22 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:22 localhost nova_compute[281854]: 2025-12-02 09:54:22.273 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:22 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:22 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:23 localhost nova_compute[281854]: 2025-12-02 09:54:23.472 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:23 localhost ceph-mon[289473]: Added label _no_schedule to host np0005541909.localdomain Dec 2 04:54:23 localhost ceph-mon[289473]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host 
np0005541909.localdomain Dec 2 04:54:23 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth rm", "entity": "mgr.np0005541909.kfesnk"} : dispatch Dec 2 04:54:23 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005541909.kfesnk"}]': finished Dec 2 04:54:23 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:23 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:24 localhost ceph-mon[289473]: Removing key for mgr.np0005541909.kfesnk Dec 2 04:54:24 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:24 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:24 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:54:24 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 04:54:25 localhost podman[292429]: 2025-12-02 09:54:25.557084375 +0000 UTC m=+0.078781199 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Dec 2 04:54:25 localhost podman[292429]: 2025-12-02 09:54:25.568946763 +0000 UTC m=+0.090643587 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 04:54:25 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:54:25 localhost ceph-mon[289473]: Removing daemon crash.np0005541909 from np0005541909.localdomain -- ports [] Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain"} : dispatch Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain"}]': finished Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth rm", "entity": "client.crash.np0005541909.localdomain"} : dispatch Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005541909.localdomain"}]': finished Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:54:25 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:26 localhost ceph-mon[289473]: Removed host np0005541909.localdomain Dec 2 04:54:26 localhost ceph-mon[289473]: Removing key for client.crash.np0005541909.localdomain Dec 2 04:54:26 localhost ceph-mon[289473]: from='mgr.14184 
172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:26 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:26 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:26 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:27 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:27 localhost nova_compute[281854]: 2025-12-02 09:54:27.275 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:27 localhost ceph-mon[289473]: Reconfiguring crash.np0005541910 (monmap changed)... Dec 2 04:54:27 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain Dec 2 04:54:27 localhost ceph-mon[289473]: Reconfiguring mon.np0005541910 (monmap changed)... 
Dec 2 04:54:27 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain Dec 2 04:54:27 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:27 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:27 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:28 localhost nova_compute[281854]: 2025-12-02 09:54:28.474 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:28 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... Dec 2 04:54:28 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:54:28 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:29 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:29 localhost ceph-mon[289473]: Reconfiguring mon.np0005541911 (monmap changed)... 
Dec 2 04:54:29 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:29 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:54:29 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:29 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:29 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)... Dec 2 04:54:29 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:29 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain Dec 2 04:54:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:54:31 localhost systemd[1]: tmp-crun.lbXHEm.mount: Deactivated successfully. 
Dec 2 04:54:31 localhost podman[292468]: 2025-12-02 09:54:31.455310118 +0000 UTC m=+0.095209359 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 2 04:54:31 localhost podman[292468]: 2025-12-02 09:54:31.484970861 +0000 UTC m=+0.124870142 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 04:54:31 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:31 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:31 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:31 localhost ceph-mon[289473]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 2 04:54:31 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:54:31 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 2 04:54:31 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:54:32 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:54:32 localhost nova_compute[281854]: 2025-12-02 09:54:32.279 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:54:32 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:32 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:32 localhost ceph-mon[289473]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:54:32 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:54:32 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:54:32 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:32 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:32 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:54:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 04:54:33 localhost podman[292486]: 2025-12-02 09:54:33.427646102 +0000 UTC m=+0.070421196 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, config_id=edpm, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 2 04:54:33 localhost podman[292486]: 2025-12-02 09:54:33.441729208 +0000 UTC m=+0.084504252 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible)
Dec 2 04:54:33 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 04:54:33 localhost nova_compute[281854]: 2025-12-02 09:54:33.477 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:54:33 localhost ceph-mon[289473]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:54:33 localhost ceph-mon[289473]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:54:33 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:33 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:33 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:33 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:54:34 localhost openstack_network_exporter[242845]: ERROR 09:54:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:54:34 localhost openstack_network_exporter[242845]: ERROR 09:54:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:54:34 localhost openstack_network_exporter[242845]: ERROR 09:54:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:54:34 localhost openstack_network_exporter[242845]: ERROR 09:54:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:54:34 localhost openstack_network_exporter[242845]:
Dec 2 04:54:34 localhost openstack_network_exporter[242845]: ERROR 09:54:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:54:34 localhost openstack_network_exporter[242845]:
Dec 2 04:54:34 localhost ceph-mon[289473]: Saving service mon spec with placement label:mon
Dec 2 04:54:34 localhost ceph-mon[289473]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:54:34 localhost ceph-mon[289473]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:54:34 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:34 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:34 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:54:35 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:54:35 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:54:35 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:35 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:35 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:54:35 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x5645037311e0 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 2 04:54:35 localhost ceph-mon[289473]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election
Dec 2 04:54:35 localhost ceph-mon[289473]: paxos.3).electionLogic(32) init, last seen epoch 32
Dec 2 04:54:35 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:36 localhost podman[240799]: time="2025-12-02T09:54:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:54:36 localhost podman[240799]: @ - - [02/Dec/2025:09:54:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1"
Dec 2 04:54:36 localhost podman[240799]: @ - - [02/Dec/2025:09:54:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18704 "" "Go-http-client/1.1"
Dec 2 04:54:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 04:54:36 localhost podman[292506]: 2025-12-02 09:54:36.451882486 +0000 UTC m=+0.086593368 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:54:36 localhost podman[292506]: 2025-12-02 09:54:36.464153255 +0000 UTC m=+0.098864087 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:54:36 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 04:54:37 localhost nova_compute[281854]: 2025-12-02 09:54:37.281 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:54:38 localhost nova_compute[281854]: 2025-12-02 09:54:38.483 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:54:40 localhost ceph-mon[289473]: paxos.3).electionLogic(33) init, last seen epoch 33, mid-election, bumping
Dec 2 04:54:40 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:40 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:40 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:40 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:54:41 localhost podman[292583]:
Dec 2 04:54:41 localhost podman[292583]: 2025-12-02 09:54:41.308498201 +0000 UTC m=+0.056097862 container create af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_dirac, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:54:41 localhost systemd[1]: Started libpod-conmon-af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f.scope.
Dec 2 04:54:41 localhost systemd[1]: Started libcrun container.
Dec 2 04:54:41 localhost podman[292583]: 2025-12-02 09:54:41.283237665 +0000 UTC m=+0.030837306 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:54:41 localhost podman[292583]: 2025-12-02 09:54:41.383706974 +0000 UTC m=+0.131306595 container init af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_dirac, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, version=7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container)
Dec 2 04:54:41 localhost podman[292583]: 2025-12-02 09:54:41.396175247 +0000 UTC m=+0.143774878 container start af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_dirac, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=1763362218, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 04:54:41 localhost podman[292583]: 2025-12-02 09:54:41.396534037 +0000 UTC m=+0.144133678 container attach af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_dirac, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:54:41 localhost suspicious_dirac[292598]: 167 167
Dec 2 04:54:41 localhost systemd[1]: libpod-af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f.scope: Deactivated successfully.
Dec 2 04:54:41 localhost podman[292583]: 2025-12-02 09:54:41.4015134 +0000 UTC m=+0.149113141 container died af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_dirac, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, release=1763362218, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 2 04:54:41 localhost podman[292603]: 2025-12-02 09:54:41.50576787 +0000 UTC m=+0.091826428 container remove af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_dirac, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1763362218, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main)
Dec 2 04:54:41 localhost systemd[1]: libpod-conmon-af67c11c156a0e3bae599c7f9492e1928a7b153afce1d33cf9f60cc4be751b3f.scope: Deactivated successfully.
Dec 2 04:54:41 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election
Dec 2 04:54:41 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election
Dec 2 04:54:41 localhost ceph-mon[289473]: mon.np0005541913 calling monitor election
Dec 2 04:54:41 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election
Dec 2 04:54:41 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election
Dec 2 04:54:41 localhost ceph-mon[289473]: Health check failed: 1/4 mons down, quorum np0005541911,np0005541910,np0005541914 (MON_DOWN)
Dec 2 04:54:41 localhost ceph-mon[289473]: overall HEALTH_OK
Dec 2 04:54:41 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election
Dec 2 04:54:41 localhost ceph-mon[289473]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913 in quorum (ranks 0,1,2,3)
Dec 2 04:54:41 localhost ceph-mon[289473]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005541911,np0005541910,np0005541914)
Dec 2 04:54:41 localhost ceph-mon[289473]: Cluster is now healthy
Dec 2 04:54:41 localhost ceph-mon[289473]: overall HEALTH_OK
Dec 2 04:54:41 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:41 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:41 localhost ceph-mon[289473]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:54:41 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:54:41 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:54:41 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:41 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw'
Dec 2 04:54:41 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:54:42 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:54:42 localhost podman[292674]:
Dec 2 04:54:42 localhost podman[292674]: 2025-12-02 09:54:42.213815499 +0000 UTC m=+0.083440674 container create 51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_allen, architecture=x86_64, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, distribution-scope=public, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:54:42 localhost systemd[1]: Started libpod-conmon-51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f.scope.
Dec 2 04:54:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 04:54:42 localhost systemd[1]: Started libcrun container.
Dec 2 04:54:42 localhost podman[292674]: 2025-12-02 09:54:42.178754961 +0000 UTC m=+0.048380176 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:54:42 localhost podman[292674]: 2025-12-02 09:54:42.289821333 +0000 UTC m=+0.159446478 container init 51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_allen, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux )
Dec 2 04:54:42 localhost nova_compute[281854]: 2025-12-02 09:54:42.316 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:54:42 localhost systemd[1]: tmp-crun.O35XT5.mount: Deactivated successfully.
Dec 2 04:54:42 localhost systemd[1]: var-lib-containers-storage-overlay-cc0638aa22702c8e0b8ba9ee737d08ad71547f53218d7fc3a677eb6fe86b53a2-merged.mount: Deactivated successfully.
Dec 2 04:54:42 localhost podman[292674]: 2025-12-02 09:54:42.328722074 +0000 UTC m=+0.198347209 container start 51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_allen, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph,
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7) Dec 2 04:54:42 localhost podman[292674]: 2025-12-02 09:54:42.329256028 +0000 UTC m=+0.198881163 container attach 51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_allen, architecture=x86_64, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, release=1763362218, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git) Dec 2 04:54:42 localhost goofy_allen[292689]: 167 167 Dec 2 04:54:42 localhost systemd[1]: libpod-51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f.scope: Deactivated successfully. 
Dec 2 04:54:42 localhost podman[292674]: 2025-12-02 09:54:42.331648393 +0000 UTC m=+0.201273558 container died 51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_allen, maintainer=Guillaume Abrioux , release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph) Dec 2 04:54:42 localhost podman[292690]: 2025-12-02 09:54:42.343827498 +0000 UTC m=+0.070954609 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:54:42 localhost podman[292690]: 2025-12-02 09:54:42.364272386 +0000 UTC m=+0.091399527 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 04:54:42 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:54:42 localhost systemd[1]: var-lib-containers-storage-overlay-49966b7a1e1c663a518f169632b4de751094d41633e12358cfad7afd9a638765-merged.mount: Deactivated successfully. 
Dec 2 04:54:42 localhost podman[292708]: 2025-12-02 09:54:42.457851 +0000 UTC m=+0.116809287 container remove 51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_allen, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=1763362218, io.openshift.tags=rhceph ceph) Dec 2 04:54:42 localhost systemd[1]: libpod-conmon-51ffb19b36e8ea39570ffcc2746ea74da6b8a158a6a62d09a7110fd228df694f.scope: Deactivated successfully. Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. 
Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.638466) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282638571, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 3170, "num_deletes": 517, "total_data_size": 9133097, "memory_usage": 9700784, "flush_reason": "Manual Compaction"} Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282678722, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 5540365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10020, "largest_seqno": 13185, "table_properties": {"data_size": 5527476, "index_size": 7666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4165, "raw_key_size": 35681, "raw_average_key_size": 21, "raw_value_size": 5497233, "raw_average_value_size": 3327, "num_data_blocks": 331, "num_entries": 1652, "num_filter_entries": 1652, "num_deletions": 516, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669219, "oldest_key_time": 1764669219, "file_creation_time": 1764669282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 40420 microseconds, and 9035 cpu microseconds. Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.678877) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 5540365 bytes OK Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.678945) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681063) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681099) EVENT_LOG_v1 {"time_micros": 1764669282681091, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681124) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 9116858, prev total WAL file size 9165647, 
number of live WAL files 2. Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.682790) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end) Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(5410KB)], [15(8874KB)] Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282682836, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14627416, "oldest_snapshot_seqno": -1} Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9819 keys, 12520209 bytes, temperature: kUnknown Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282773472, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 12520209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12463448, "index_size": 31124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 261997, "raw_average_key_size": 26, "raw_value_size": 12294357, 
"raw_average_value_size": 1252, "num_data_blocks": 1189, "num_entries": 9819, "num_filter_entries": 9819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669201, "oldest_key_time": 0, "file_creation_time": 1764669282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.773881) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12520209 bytes Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.775552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.2 rd, 138.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.3, 8.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(4.9) write-amplify(2.3) OK, records in: 10901, records dropped: 1082 output_compression: NoCompression Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.775588) EVENT_LOG_v1 {"time_micros": 1764669282775572, "job": 6, "event": "compaction_finished", "compaction_time_micros": 90748, "compaction_time_cpu_micros": 27991, "output_level": 6, "num_output_files": 1, "total_output_size": 12520209, "num_input_records": 10901, "num_output_records": 9819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282776702, "job": 6, "event": "table_file_deletion", "file_number": 17} Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282778281, "job": 6, 
"event": "table_file_deletion", "file_number": 15} Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.682692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.778316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.778323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.778326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.778329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:42.778331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:42 localhost ceph-mon[289473]: Reconfiguring osd.0 (monmap changed)... 
Dec 2 04:54:42 localhost ceph-mon[289473]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:54:42 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:42 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:42 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:54:42 localhost nova_compute[281854]: 2025-12-02 09:54:42.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:43 localhost podman[292786]: Dec 2 04:54:43 localhost podman[292786]: 2025-12-02 09:54:43.248621833 +0000 UTC m=+0.052140116 container create fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_raman, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, version=7) Dec 2 04:54:43 localhost systemd[1]: Started libpod-conmon-fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b.scope. Dec 2 04:54:43 localhost systemd[1]: Started libcrun container. Dec 2 04:54:43 localhost podman[292786]: 2025-12-02 09:54:43.299930796 +0000 UTC m=+0.103449089 container init fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_raman, distribution-scope=public, io.buildah.version=1.41.4, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:54:43 localhost podman[292786]: 2025-12-02 09:54:43.307285333 +0000 UTC m=+0.110803616 container start fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_raman, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, RELEASE=main, release=1763362218, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:54:43 localhost podman[292786]: 2025-12-02 09:54:43.307469997 +0000 UTC m=+0.110988310 container attach fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_raman, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, 
com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git) Dec 2 04:54:43 localhost sad_raman[292801]: 167 167 Dec 2 04:54:43 localhost systemd[1]: libpod-fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b.scope: Deactivated successfully. Dec 2 04:54:43 localhost podman[292786]: 2025-12-02 09:54:43.311794414 +0000 UTC m=+0.115312717 container died fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_raman, io.buildah.version=1.41.4, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, name=rhceph, GIT_CLEAN=True) Dec 2 04:54:43 
localhost podman[292786]: 2025-12-02 09:54:43.223464199 +0000 UTC m=+0.026982532 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:43 localhost systemd[1]: var-lib-containers-storage-overlay-975f9be749d384e67e5da09162bbe797c10c50b15982d4c8ea024a86b856ed2d-merged.mount: Deactivated successfully. Dec 2 04:54:43 localhost podman[292806]: 2025-12-02 09:54:43.40431353 +0000 UTC m=+0.078583434 container remove fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_raman, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, release=1763362218, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:54:43 localhost systemd[1]: libpod-conmon-fe97f2c7084d0c4fed940777dcf6cdec965ccb622a532637fb15996a7af8d63b.scope: Deactivated successfully. 
Dec 2 04:54:43 localhost nova_compute[281854]: 2025-12-02 09:54:43.520 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:43 localhost ceph-mon[289473]: Reconfiguring osd.3 (monmap changed)... Dec 2 04:54:43 localhost ceph-mon[289473]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:54:43 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:43 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:43 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:54:44 localhost podman[292883]: Dec 2 04:54:44 localhost podman[292883]: 2025-12-02 09:54:44.03991807 +0000 UTC m=+0.045328544 container create 6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_fermat, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, io.openshift.expose-services=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:54:44 localhost systemd[1]: Started libpod-conmon-6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25.scope. Dec 2 04:54:44 localhost systemd[1]: Started libcrun container. Dec 2 04:54:44 localhost podman[292883]: 2025-12-02 09:54:44.103773589 +0000 UTC m=+0.109184033 container init 6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_fermat, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_BRANCH=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, release=1763362218) Dec 2 04:54:44 localhost podman[292883]: 2025-12-02 09:54:44.113217921 +0000 UTC m=+0.118628395 container start 
6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_fermat, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, CEPH_POINT_RELEASE=, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git) Dec 2 04:54:44 localhost podman[292883]: 2025-12-02 09:54:44.113443707 +0000 UTC m=+0.118854171 container attach 6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_fermat, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, 
build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Dec 2 04:54:44 localhost naughty_fermat[292898]: 167 167 Dec 2 04:54:44 localhost systemd[1]: libpod-6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25.scope: Deactivated successfully. Dec 2 04:54:44 localhost podman[292883]: 2025-12-02 09:54:44.116957791 +0000 UTC m=+0.122368265 container died 6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_fermat, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, 
CEPH_POINT_RELEASE=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4) Dec 2 04:54:44 localhost podman[292883]: 2025-12-02 09:54:44.020357306 +0000 UTC m=+0.025767740 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:44 localhost podman[292903]: 2025-12-02 09:54:44.18939713 +0000 UTC m=+0.060737466 container remove 6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_fermat, maintainer=Guillaume Abrioux , GIT_BRANCH=main, version=7, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4) Dec 2 04:54:44 localhost systemd[1]: libpod-conmon-6a338a69d54bc7cddaff20e9af58ca43cafd8727d73431bdcf10a5b996e5ea25.scope: Deactivated successfully. Dec 2 04:54:44 localhost systemd[1]: var-lib-containers-storage-overlay-33c47d5d8abfd6bcc148330ef95dfbec3ad7883f8d9c96a56ac7fc6b10227ee0-merged.mount: Deactivated successfully. 
Dec 2 04:54:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:54:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:54:44 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:54:44 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:54:44 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:44 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:44 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:44 localhost podman[292972]: 2025-12-02 09:54:44.816719209 +0000 UTC m=+0.060067339 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true) Dec 2 04:54:44 localhost nova_compute[281854]: 2025-12-02 09:54:44.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:44 localhost nova_compute[281854]: 2025-12-02 09:54:44.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:54:44 localhost podman[292987]: Dec 2 04:54:44 localhost podman[292987]: 2025-12-02 09:54:44.83620884 +0000 UTC m=+0.060968833 container create 30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_elion, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True) Dec 2 04:54:44 localhost podman[292987]: 2025-12-02 09:54:44.812467485 +0000 UTC m=+0.037227488 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:54:44 localhost systemd[1]: Started libpod-conmon-30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676.scope. 
Dec 2 04:54:44 localhost podman[292971]: 2025-12-02 09:54:44.927304229 +0000 UTC m=+0.168051139 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:54:44 localhost podman[292972]: 2025-12-02 09:54:44.934808129 +0000 UTC m=+0.178156239 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller) Dec 2 04:54:44 localhost systemd[1]: Started libcrun container. Dec 2 04:54:44 localhost podman[292987]: 2025-12-02 09:54:44.946040139 +0000 UTC m=+0.170800172 container init 30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_elion, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Dec 2 
04:54:44 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:54:44 localhost podman[292987]: 2025-12-02 09:54:44.954617809 +0000 UTC m=+0.179377802 container start 30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_elion, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, ceph=True, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:54:44 localhost podman[292987]: 2025-12-02 09:54:44.955657967 +0000 UTC m=+0.180417970 container attach 30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_elion, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container) Dec 2 04:54:44 localhost fervent_elion[293033]: 167 167 Dec 2 04:54:44 localhost systemd[1]: libpod-30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676.scope: Deactivated successfully. 
Dec 2 04:54:44 localhost podman[292987]: 2025-12-02 09:54:44.957875537 +0000 UTC m=+0.182635530 container died 30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_elion, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:54:44 localhost podman[292971]: 2025-12-02 09:54:44.991053854 +0000 UTC m=+0.231800814 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:54:45 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:54:45 localhost podman[293041]: 2025-12-02 09:54:45.077357214 +0000 UTC m=+0.110549899 container remove 30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_elion, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, ceph=True, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph) Dec 2 04:54:45 localhost systemd[1]: libpod-conmon-30bcf210668a25bf48a22e0a56bf0d970964d1b53c572fc4019a1808cab36676.scope: 
Deactivated successfully. Dec 2 04:54:45 localhost systemd[1]: var-lib-containers-storage-overlay-43529ba9346b7711928e0610390f5fa37417762ac54993ba1b99fc5a1c13ea7f-merged.mount: Deactivated successfully. Dec 2 04:54:45 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:54:45 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:54:45 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:45 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:45 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:45 localhost nova_compute[281854]: 2025-12-02 09:54:45.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:45 localhost nova_compute[281854]: 2025-12-02 09:54:45.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:54:45 localhost nova_compute[281854]: 2025-12-02 09:54:45.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:54:46 localhost nova_compute[281854]: 2025-12-02 09:54:46.585 281858 DEBUG oslo_concurrency.lockutils [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:54:46 localhost nova_compute[281854]: 2025-12-02 09:54:46.585 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:54:46 localhost nova_compute[281854]: 2025-12-02 09:54:46.585 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:54:46 localhost nova_compute[281854]: 2025-12-02 09:54:46.585 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:54:46 localhost ceph-mon[289473]: Reconfiguring crash.np0005541914 (monmap changed)... 
Dec 2 04:54:46 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:54:46 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:46 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:46 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:54:46 localhost nova_compute[281854]: 2025-12-02 09:54:46.938 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:54:47 
localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.140 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.141 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.141 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.142 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.376 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:54:47 localhost ceph-mon[289473]: Reconfiguring osd.1 (monmap changed)... 
Dec 2 04:54:47 localhost ceph-mon[289473]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:54:47 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:47 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:47 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.936 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.937 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.937 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.937 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:54:47 localhost nova_compute[281854]: 2025-12-02 09:54:47.938 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:54:48 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:54:48 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2742484200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.324 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.403 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.403 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.582 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.691 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.693 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11812MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.693 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.693 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:54:48 localhost ceph-mon[289473]: Reconfiguring osd.4 (monmap changed)... 
Dec 2 04:54:48 localhost ceph-mon[289473]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:54:48 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:48 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:48 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:54:48 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:48 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:48 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.914 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.915 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.916 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:54:48 localhost nova_compute[281854]: 2025-12-02 09:54:48.949 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.315458) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289315526, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 520, "num_deletes": 256, "total_data_size": 621723, "memory_usage": 632808, "flush_reason": "Manual Compaction"} Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289320353, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 358219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13190, "largest_seqno": 13705, "table_properties": {"data_size": 355267, "index_size": 935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7391, "raw_average_key_size": 19, "raw_value_size": 349101, "raw_average_value_size": 921, "num_data_blocks": 39, "num_entries": 379, "num_filter_entries": 379, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669282, "oldest_key_time": 1764669282, "file_creation_time": 1764669289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 4939 microseconds, and 1866 cpu microseconds. Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.320405) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 358219 bytes OK Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.320432) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.322094) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.322119) EVENT_LOG_v1 {"time_micros": 1764669289322112, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.322144) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 618499, prev total WAL file size 618499, number of live WAL 
files 2. Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.322813) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353136' seq:72057594037927935, type:22 .. '6C6F676D0033373638' seq:0, type:0; will stop at (end) Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(349KB)], [18(11MB)] Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289322922, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 12878428, "oldest_snapshot_seqno": -1} Dec 2 04:54:49 localhost nova_compute[281854]: 2025-12-02 09:54:49.387 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:54:49 localhost nova_compute[281854]: 2025-12-02 09:54:49.395 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 
8] Generated table #21: 9668 keys, 12768934 bytes, temperature: kUnknown Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289424760, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 12768934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12712157, "index_size": 31524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24197, "raw_key_size": 260004, "raw_average_key_size": 26, "raw_value_size": 12544692, "raw_average_value_size": 1297, "num_data_blocks": 1204, "num_entries": 9668, "num_filter_entries": 9668, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669201, "oldest_key_time": 0, "file_creation_time": 1764669289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.425197) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 12768934 bytes Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.427190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.3 rd, 125.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.9 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(71.6) write-amplify(35.6) OK, records in: 10198, records dropped: 530 output_compression: NoCompression Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.427243) EVENT_LOG_v1 {"time_micros": 1764669289427220, "job": 8, "event": "compaction_finished", "compaction_time_micros": 101949, "compaction_time_cpu_micros": 39892, "output_level": 6, "num_output_files": 1, "total_output_size": 12768934, "num_input_records": 10198, "num_output_records": 9668, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289427665, "job": 8, "event": "table_file_deletion", "file_number": 20} Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289431116, "job": 8, 
"event": "table_file_deletion", "file_number": 18} Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.322723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.431239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.431248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.431252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.431256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:54:49.431259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:54:49 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:54:49 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:54:49 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... 
Dec 2 04:54:49 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:54:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:49 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:50 localhost nova_compute[281854]: 2025-12-02 09:54:50.035 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:54:50 localhost nova_compute[281854]: 2025-12-02 09:54:50.038 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:54:50 localhost nova_compute[281854]: 2025-12-02 09:54:50.039 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:54:51 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:51 localhost ceph-mon[289473]: 
from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:51 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:54:51 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:51 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:51 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:51 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:51 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:54:52 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:54:52 localhost nova_compute[281854]: 2025-12-02 09:54:52.380 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:53 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:54:53 localhost ceph-mon[289473]: Deploying daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:54:53 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost 
ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:53 localhost nova_compute[281854]: 2025-12-02 09:54:53.585 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:54 localhost ceph-mon[289473]: Reconfiguring crash.np0005541910 (monmap changed)... 
Dec 2 04:54:54 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:54:54 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain Dec 2 04:54:54 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Dec 2 04:54:54 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Dec 2 04:54:55 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:55 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:55 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... 
Dec 2 04:54:55 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:54:55 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:54:55 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:55 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:54:55 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Dec 2 04:54:55 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x564503731600 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Dec 2 04:54:55 localhost ceph-mon[289473]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election Dec 2 04:54:55 localhost ceph-mon[289473]: paxos.3).electionLogic(38) init, last seen epoch 38 Dec 2 04:54:55 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:55 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:55 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 04:54:56 localhost podman[293509]: 2025-12-02 09:54:56.433485802 +0000 UTC m=+0.073004514 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 04:54:56 localhost podman[293509]: 2025-12-02 09:54:56.445022931 +0000 UTC m=+0.084541683 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 04:54:56 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:54:57 localhost nova_compute[281854]: 2025-12-02 09:54:57.382 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:54:58 localhost nova_compute[281854]: 2025-12-02 09:54:58.588 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:00 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:55:00 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)... Dec 2 04:55:00 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain Dec 2 04:55:00 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election Dec 2 04:55:00 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election Dec 2 04:55:00 localhost ceph-mon[289473]: mon.np0005541913 calling monitor election Dec 2 04:55:00 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election Dec 2 04:55:00 localhost ceph-mon[289473]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913 in quorum (ranks 0,1,2,3) Dec 2 04:55:00 localhost ceph-mon[289473]: Health check failed: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913 (MON_DOWN) Dec 2 04:55:00 localhost ceph-mon[289473]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913 Dec 2 04:55:00 localhost ceph-mon[289473]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913 Dec 2 04:55:00 localhost ceph-mon[289473]: mon.np0005541912 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum) Dec 2 04:55:00 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 
04:55:00 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:01 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:01 localhost ceph-mon[289473]: Reconfiguring crash.np0005541911 (monmap changed)... Dec 2 04:55:01 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:01 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain Dec 2 04:55:01 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:01 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:01 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:02 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:55:02 localhost nova_compute[281854]: 2025-12-02 09:55:02.385 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:02 localhost ceph-mon[289473]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election Dec 2 04:55:02 localhost ceph-mon[289473]: paxos.3).electionLogic(40) init, last seen epoch 40 Dec 2 04:55:02 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:55:02 localhost ceph-mon[289473]: mon.np0005541913@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:55:02 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:55:02 localhost podman[293528]: 2025-12-02 09:55:02.44521396 +0000 UTC m=+0.083316201 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 04:55:02 localhost podman[293528]: 2025-12-02 09:55:02.480037692 +0000 UTC m=+0.118139873 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:55:02 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:55:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:55:03.040 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:55:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:55:03.040 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:55:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:55:03.042 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:55:03 localhost ceph-mon[289473]: mon.np0005541912 calling monitor election Dec 2 04:55:03 localhost ceph-mon[289473]: Reconfiguring 
osd.2 (monmap changed)... Dec 2 04:55:03 localhost ceph-mon[289473]: Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:55:03 localhost ceph-mon[289473]: mon.np0005541910 calling monitor election Dec 2 04:55:03 localhost ceph-mon[289473]: mon.np0005541912 calling monitor election Dec 2 04:55:03 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election Dec 2 04:55:03 localhost ceph-mon[289473]: mon.np0005541913 calling monitor election Dec 2 04:55:03 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election Dec 2 04:55:03 localhost ceph-mon[289473]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4) Dec 2 04:55:03 localhost ceph-mon[289473]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913) Dec 2 04:55:03 localhost ceph-mon[289473]: Cluster is now healthy Dec 2 04:55:03 localhost ceph-mon[289473]: overall HEALTH_OK Dec 2 04:55:03 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:03 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:03 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:55:03 localhost nova_compute[281854]: 2025-12-02 09:55:03.590 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:04 localhost openstack_network_exporter[242845]: ERROR 09:55:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:55:04 localhost openstack_network_exporter[242845]: ERROR 09:55:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:55:04 
localhost openstack_network_exporter[242845]: ERROR 09:55:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:55:04 localhost openstack_network_exporter[242845]: ERROR 09:55:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:55:04 localhost openstack_network_exporter[242845]: Dec 2 04:55:04 localhost openstack_network_exporter[242845]: ERROR 09:55:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:55:04 localhost openstack_network_exporter[242845]: Dec 2 04:55:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:55:04 localhost ceph-mon[289473]: Reconfiguring osd.5 (monmap changed)... Dec 2 04:55:04 localhost ceph-mon[289473]: Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:55:04 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:04 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:04 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:04 localhost podman[293546]: 2025-12-02 09:55:04.429814503 +0000 UTC m=+0.068470263 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 2 04:55:04 localhost podman[293546]: 2025-12-02 09:55:04.444946938 +0000 UTC m=+0.083602718 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.buildah.version=1.33.7, version=9.6) Dec 2 04:55:04 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:55:05 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... 
Dec 2 04:55:05 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:55:05 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:05 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:05 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:05 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:05 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:05 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:06 localhost podman[240799]: time="2025-12-02T09:55:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:55:06 localhost podman[240799]: @ - - [02/Dec/2025:09:55:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 04:55:06 localhost podman[240799]: @ - - [02/Dec/2025:09:55:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18706 "" "Go-http-client/1.1" Dec 2 04:55:06 localhost podman[293618]: Dec 2 04:55:06 localhost podman[293618]: 2025-12-02 09:55:06.420685313 +0000 UTC m=+0.079476317 container create 61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=elegant_bose, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:55:06 localhost systemd[1]: Started libpod-conmon-61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61.scope. Dec 2 04:55:06 localhost podman[293618]: 2025-12-02 09:55:06.388133492 +0000 UTC m=+0.046924516 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:06 localhost systemd[1]: Started libcrun container. Dec 2 04:55:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 04:55:06 localhost podman[293618]: 2025-12-02 09:55:06.519242171 +0000 UTC m=+0.178033145 container init 61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_bose, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=1763362218, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph) Dec 2 04:55:06 localhost systemd[1]: tmp-crun.kbVurg.mount: Deactivated successfully. 
Dec 2 04:55:06 localhost podman[293618]: 2025-12-02 09:55:06.53751419 +0000 UTC m=+0.196305164 container start 61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_bose, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:55:06 localhost podman[293618]: 2025-12-02 09:55:06.538552948 +0000 UTC m=+0.197343922 container attach 61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_bose, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, 
io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, ceph=True, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:55:06 localhost elegant_bose[293634]: 167 167 Dec 2 04:55:06 localhost systemd[1]: libpod-61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61.scope: Deactivated successfully. Dec 2 04:55:06 localhost podman[293618]: 2025-12-02 09:55:06.543657155 +0000 UTC m=+0.202448159 container died 61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_bose, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, distribution-scope=public, io.openshift.expose-services=, name=rhceph, vcs-type=git, build-date=2025-11-26T19:44:28Z) Dec 2 04:55:06 localhost podman[293637]: 2025-12-02 09:55:06.601425011 +0000 UTC m=+0.086759793 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:55:06 localhost podman[293637]: 2025-12-02 09:55:06.614941092 +0000 UTC m=+0.100275844 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:55:06 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:55:06 localhost podman[293650]: 2025-12-02 09:55:06.704226991 +0000 UTC m=+0.143315966 container remove 61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_bose, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:55:06 localhost systemd[1]: libpod-conmon-61feebfcfc6f867124b867e58fe5b8f804ab18b8e9b4f2489f6e029af09a0b61.scope: Deactivated successfully. Dec 2 04:55:06 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)... Dec 2 04:55:06 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:55:06 localhost ceph-mon[289473]: Reconfiguring crash.np0005541913 (monmap changed)... 
Dec 2 04:55:06 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:55:06 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:06 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:06 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:55:07 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:07 localhost nova_compute[281854]: 2025-12-02 09:55:07.387 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:07 localhost systemd[1]: var-lib-containers-storage-overlay-9f955240051d03777a505466749a19601f44f417af121f0dcc452e65d8616616-merged.mount: Deactivated successfully. 
Dec 2 04:55:07 localhost podman[293732]: Dec 2 04:55:07 localhost podman[293732]: 2025-12-02 09:55:07.472550154 +0000 UTC m=+0.089681631 container create 678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_swanson, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, release=1763362218, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:55:07 localhost systemd[1]: Started libpod-conmon-678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646.scope. Dec 2 04:55:07 localhost systemd[1]: Started libcrun container. 
Dec 2 04:55:07 localhost podman[293732]: 2025-12-02 09:55:07.43913228 +0000 UTC m=+0.056263827 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:07 localhost podman[293732]: 2025-12-02 09:55:07.542600809 +0000 UTC m=+0.159732306 container init 678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_swanson, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, version=7) Dec 2 04:55:07 localhost podman[293732]: 2025-12-02 09:55:07.555083873 +0000 UTC m=+0.172215400 container start 678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_swanson, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, version=7, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:55:07 localhost podman[293732]: 2025-12-02 09:55:07.555686788 +0000 UTC m=+0.172818295 container attach 678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_swanson, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, ceph=True, io.k8s.description=Red 
Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:55:07 localhost affectionate_swanson[293746]: 167 167 Dec 2 04:55:07 localhost podman[293732]: 2025-12-02 09:55:07.575841688 +0000 UTC m=+0.192973205 container died 678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_swanson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, RELEASE=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:55:07 localhost systemd[1]: libpod-678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646.scope: Deactivated successfully. 
Dec 2 04:55:07 localhost podman[293751]: 2025-12-02 09:55:07.689257353 +0000 UTC m=+0.098587099 container remove 678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_swanson, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, RELEASE=main, name=rhceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:55:07 localhost systemd[1]: libpod-conmon-678d54a6eca58f9b08c69d52db1d073ca5850d93bc2f570eae232b4cba262646.scope: Deactivated successfully. Dec 2 04:55:07 localhost ceph-mon[289473]: Reconfiguring osd.0 (monmap changed)... 
Dec 2 04:55:07 localhost ceph-mon[289473]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:55:07 localhost ceph-mon[289473]: Reconfig service osd.default_drive_group Dec 2 04:55:07 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:07 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:07 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost systemd[1]: var-lib-containers-storage-overlay-849e1f0b64969c4c08c1f26ff847021db1cd8ae7309d369f35cd8de39c3701ce-merged.mount: Deactivated successfully. Dec 2 04:55:08 localhost podman[293827]: Dec 2 04:55:08 localhost podman[293827]: 2025-12-02 09:55:08.595125916 +0000 UTC m=+0.070063796 container create d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_zhukovsky, release=1763362218, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.buildah.version=1.41.4, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7) Dec 2 04:55:08 localhost nova_compute[281854]: 2025-12-02 09:55:08.593 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:08 localhost systemd[1]: Started libpod-conmon-d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6.scope. Dec 2 04:55:08 localhost systemd[1]: Started libcrun container. Dec 2 04:55:08 localhost podman[293827]: 2025-12-02 09:55:08.56495831 +0000 UTC m=+0.039896270 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:08 localhost podman[293827]: 2025-12-02 09:55:08.663663401 +0000 UTC m=+0.138601321 container init d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_zhukovsky, io.buildah.version=1.41.4, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , distribution-scope=public, CEPH_POINT_RELEASE=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, architecture=x86_64) Dec 2 
04:55:08 localhost podman[293827]: 2025-12-02 09:55:08.677100881 +0000 UTC m=+0.152038791 container start d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_zhukovsky, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.) 
Dec 2 04:55:08 localhost podman[293827]: 2025-12-02 09:55:08.677368248 +0000 UTC m=+0.152306198 container attach d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_zhukovsky, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=1763362218, build-date=2025-11-26T19:44:28Z, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux ) Dec 2 04:55:08 localhost stoic_zhukovsky[293842]: 167 167 Dec 2 04:55:08 localhost systemd[1]: libpod-d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6.scope: Deactivated successfully. 
Dec 2 04:55:08 localhost podman[293827]: 2025-12-02 09:55:08.682938457 +0000 UTC m=+0.157876367 container died d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_zhukovsky, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, release=1763362218, vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=) Dec 2 04:55:08 localhost podman[293847]: 2025-12-02 09:55:08.774795814 +0000 UTC m=+0.083251328 container remove d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_zhukovsky, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:55:08 localhost systemd[1]: libpod-conmon-d784826d7cab897b10e1d7325f678ac5523e09a4bcae33d290e7c0e44a390eb6.scope: Deactivated successfully. Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: Reconfiguring osd.3 (monmap changed)... 
Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:55:08 localhost ceph-mon[289473]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:08 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost systemd[1]: var-lib-containers-storage-overlay-c4f2212c0e5df612c2a81411783f5e19957d870cc9113eadc8fd50dfa0cd686b-merged.mount: Deactivated successfully. 
Dec 2 04:55:09 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e87 e87: 6 total, 6 up, 6 in Dec 2 04:55:09 localhost podman[293923]: Dec 2 04:55:09 localhost podman[293923]: 2025-12-02 09:55:09.622746187 +0000 UTC m=+0.078547643 container create 76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_gould, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True) Dec 2 04:55:09 localhost systemd[1]: Started libpod-conmon-76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43.scope. Dec 2 04:55:09 localhost systemd-logind[757]: Session 65 logged out. Waiting for processes to exit. Dec 2 04:55:09 localhost systemd[1]: Started libcrun container. 
Dec 2 04:55:09 localhost podman[293923]: 2025-12-02 09:55:09.594310346 +0000 UTC m=+0.050111832 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:09 localhost podman[293923]: 2025-12-02 09:55:09.704992487 +0000 UTC m=+0.160793923 container init 76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_gould, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:55:09 localhost podman[293923]: 2025-12-02 09:55:09.71593841 +0000 UTC m=+0.171739866 container start 76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_gould, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 
7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, vcs-type=git, vendor=Red Hat, Inc., release=1763362218, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public) Dec 2 04:55:09 localhost podman[293923]: 2025-12-02 09:55:09.716215678 +0000 UTC m=+0.172017114 container attach 76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_gould, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, version=7) Dec 2 04:55:09 localhost happy_gould[293938]: 167 167 Dec 2 04:55:09 localhost systemd[1]: libpod-76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43.scope: Deactivated successfully. Dec 2 04:55:09 localhost podman[293923]: 2025-12-02 09:55:09.718948911 +0000 UTC m=+0.174750347 container died 76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_gould, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, RELEASE=main) Dec 2 04:55:09 localhost podman[293943]: 2025-12-02 09:55:09.830230939 +0000 UTC m=+0.096524314 container remove 76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_gould, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:55:09 localhost systemd[1]: libpod-conmon-76b0abb565f877404435e1c7109f89385b9711965607944334c280e2dca01d43.scope: Deactivated successfully. Dec 2 04:55:09 localhost systemd[1]: session-65.scope: Deactivated successfully. Dec 2 04:55:09 localhost systemd[1]: session-65.scope: Consumed 19.772s CPU time. Dec 2 04:55:09 localhost systemd-logind[757]: Removed session 65. 
Dec 2 04:55:09 localhost sshd[293959]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='client.? 172.18.0.200:0/2202206912' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: Activating manager daemon np0005541914.lljzmk Dec 2 04:55:09 localhost ceph-mon[289473]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 2 04:55:09 localhost ceph-mon[289473]: Manager daemon np0005541914.lljzmk is now available Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"}]': finished Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"}]': finished Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 04:55:09 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 04:55:10 localhost systemd-logind[757]: New session 66 of user ceph-admin. Dec 2 04:55:10 localhost systemd[1]: Started Session 66 of User ceph-admin. Dec 2 04:55:10 localhost systemd[1]: var-lib-containers-storage-overlay-8401a8d1babdb6458ecb02658dbc12bf4dd9c1ed37ac5282f3d398e501423c8a-merged.mount: Deactivated successfully. Dec 2 04:55:10 localhost ceph-mon[289473]: removing stray HostCache host record np0005541909.localdomain.devices.0 Dec 2 04:55:11 localhost podman[294069]: 2025-12-02 09:55:11.042881948 +0000 UTC m=+0.076453707 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, release=1763362218) Dec 2 04:55:11 localhost podman[294069]: 2025-12-02 09:55:11.128981642 +0000 UTC m=+0.162553371 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-type=git, RELEASE=main, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64) Dec 2 04:55:11 localhost ceph-mon[289473]: [02/Dec/2025:09:55:10] ENGINE Bus STARTING Dec 2 04:55:11 localhost ceph-mon[289473]: [02/Dec/2025:09:55:10] ENGINE Serving on http://172.18.0.108:8765 Dec 2 04:55:11 localhost ceph-mon[289473]: [02/Dec/2025:09:55:10] ENGINE Serving on https://172.18.0.108:7150 Dec 2 04:55:12 localhost ceph-mon[289473]: [02/Dec/2025:09:55:10] ENGINE Client ('172.18.0.108', 34066) lost — peer dropped the 
TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 04:55:12 localhost ceph-mon[289473]: [02/Dec/2025:09:55:10] ENGINE Bus STARTED Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:12 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:12 localhost nova_compute[281854]: 2025-12-02 09:55:12.391 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:55:12 localhost systemd[1]: tmp-crun.sIodTz.mount: Deactivated successfully. 
Dec 2 04:55:12 localhost podman[294278]: 2025-12-02 09:55:12.79045736 +0000 UTC m=+0.112464260 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3) Dec 2 04:55:12 localhost podman[294278]: 2025-12-02 09:55:12.828462937 +0000 UTC m=+0.150469847 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true) Dec 2 04:55:12 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 04:55:13 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 04:55:13 localhost 
ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 04:55:13 localhost nova_compute[281854]: 2025-12-02 09:55:13.596 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:14 localhost ceph-mon[289473]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 04:55:14 localhost ceph-mon[289473]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 04:55:14 localhost ceph-mon[289473]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 
04:55:14 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 04:55:14 localhost ceph-mon[289473]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:55:14 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: 
Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:14 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:55:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:55:15 localhost systemd[1]: tmp-crun.V99SdH.mount: Deactivated successfully. Dec 2 04:55:15 localhost podman[294813]: 2025-12-02 09:55:15.161505086 +0000 UTC m=+0.106215153 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:55:15 
localhost podman[294813]: 2025-12-02 09:55:15.200050917 +0000 UTC m=+0.144760964 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 04:55:15 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 04:55:15 localhost podman[294815]: 2025-12-02 09:55:15.247374864 +0000 UTC m=+0.190845598 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 2 04:55:15 localhost podman[294815]: 2025-12-02 09:55:15.333187911 +0000 UTC m=+0.276658635 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:55:15 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:55:15 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:55:16 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' 
entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:16 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:17 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:17 localhost nova_compute[281854]: 2025-12-02 09:55:17.395 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:17 localhost ceph-mon[289473]: Reconfiguring crash.np0005541910 (monmap changed)... 
Dec 2 04:55:17 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain Dec 2 04:55:17 localhost ceph-mon[289473]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Dec 2 04:55:17 localhost ceph-mon[289473]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Dec 2 04:55:17 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:17 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:17 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:17 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:18 localhost nova_compute[281854]: 2025-12-02 09:55:18.598 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:19 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... 
Dec 2 04:55:19 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:55:19 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:19 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:19 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:55:19 localhost ceph-mon[289473]: Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:55:21 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:21 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:21 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:21 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:21 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:21 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:55:21 localhost ceph-mon[289473]: Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:55:22 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:22 localhost nova_compute[281854]: 2025-12-02 09:55:22.398 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:23 localhost nova_compute[281854]: 2025-12-02 09:55:23.600 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:25 localhost ceph-mon[289473]: from='mgr.17121 ' 
entity='mgr.np0005541914.lljzmk' Dec 2 04:55:25 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:25 localhost ceph-mon[289473]: Saving service mon spec with placement label:mon Dec 2 04:55:25 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:25 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:25 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:25 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:26 localhost ceph-mon[289473]: Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:55:26 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:55:26 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:26 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:26 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:55:26 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:26 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:26 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 2 04:55:26 localhost podman[295095]:
Dec 2 04:55:26 localhost podman[295095]: 2025-12-02 09:55:26.300320134 +0000 UTC m=+0.085884538 container create ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_brahmagupta, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218)
Dec 2 04:55:26 localhost systemd[1]: Started libpod-conmon-ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e.scope.
Dec 2 04:55:26 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:26 localhost podman[295095]: 2025-12-02 09:55:26.269463819 +0000 UTC m=+0.055028273 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:26 localhost podman[295095]: 2025-12-02 09:55:26.376095423 +0000 UTC m=+0.161659857 container init ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_brahmagupta, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Dec 2 04:55:26 localhost podman[295095]: 2025-12-02 09:55:26.384652072 +0000 UTC m=+0.170216506 container start ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_brahmagupta, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git)
Dec 2 04:55:26 localhost podman[295095]: 2025-12-02 09:55:26.384927569 +0000 UTC m=+0.170492033 container attach ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_brahmagupta, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True)
Dec 2 04:55:26 localhost condescending_brahmagupta[295110]: 167 167
Dec 2 04:55:26 localhost systemd[1]: libpod-ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e.scope: Deactivated successfully.
Dec 2 04:55:26 localhost podman[295095]: 2025-12-02 09:55:26.391444153 +0000 UTC m=+0.177008627 container died ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_brahmagupta, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, release=1763362218, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-type=git)
Dec 2 04:55:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:55:26 localhost podman[295115]: 2025-12-02 09:55:26.509187004 +0000 UTC m=+0.104144788 container remove ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_brahmagupta, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7)
Dec 2 04:55:26 localhost systemd[1]: libpod-conmon-ead4efb670ba7986942a41e576112c87a7eccfb14a8585f9a2c97bb44531a16e.scope: Deactivated successfully.
Dec 2 04:55:26 localhost podman[295127]: 2025-12-02 09:55:26.595089113 +0000 UTC m=+0.101089086 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3)
Dec 2 04:55:26 localhost podman[295127]: 2025-12-02 09:55:26.609054177 +0000 UTC m=+0.115054200 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 04:55:26 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 04:55:27 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:27 localhost podman[295204]:
Dec 2 04:55:27 localhost podman[295204]: 2025-12-02 09:55:27.251121087 +0000 UTC m=+0.070075086 container create a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_varahamihira, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, version=7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, com.redhat.component=rhceph-container)
Dec 2 04:55:27 localhost systemd[1]: Started libpod-conmon-a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037.scope.
Dec 2 04:55:27 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:27 localhost systemd[1]: var-lib-containers-storage-overlay-0c7c9c2f61f9960fbf47d176668d7d64f15b6a30c632a994c2abcbd0ac2504cf-merged.mount: Deactivated successfully.
Dec 2 04:55:27 localhost podman[295204]: 2025-12-02 09:55:27.31474851 +0000 UTC m=+0.133702479 container init a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_varahamihira, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64)
Dec 2 04:55:27 localhost podman[295204]: 2025-12-02 09:55:27.32111155 +0000 UTC m=+0.140065519 container start a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_varahamihira, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, version=7, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1763362218, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container)
Dec 2 04:55:27 localhost podman[295204]: 2025-12-02 09:55:27.321333566 +0000 UTC m=+0.140287535 container attach a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_varahamihira, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, release=1763362218, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 2 04:55:27 localhost vibrant_varahamihira[295219]: 167 167
Dec 2 04:55:27 localhost podman[295204]: 2025-12-02 09:55:27.226413426 +0000 UTC m=+0.045367405 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:27 localhost podman[295204]: 2025-12-02 09:55:27.340201711 +0000 UTC m=+0.159155670 container died a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_varahamihira, io.buildah.version=1.41.4, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True)
Dec 2 04:55:27 localhost systemd[1]: libpod-a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037.scope: Deactivated successfully.
Dec 2 04:55:27 localhost systemd[1]: var-lib-containers-storage-overlay-754a0a7cbc31610f7bd21c041c3b1e8e5eaf84ca00398450a433ed70430c1ab4-merged.mount: Deactivated successfully.
Dec 2 04:55:27 localhost nova_compute[281854]: 2025-12-02 09:55:27.401 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:55:27 localhost podman[295224]: 2025-12-02 09:55:27.413767609 +0000 UTC m=+0.068638138 container remove a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_varahamihira, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, architecture=x86_64, io.buildah.version=1.41.4)
Dec 2 04:55:27 localhost systemd[1]: libpod-conmon-a64a010d249d0cac800180d92e2630489433cad8ead76612fc0a312da23c7037.scope: Deactivated successfully.
Dec 2 04:55:27 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e9 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 2 04:55:27 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/555242505' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 2 04:55:27 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:27 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:27 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:55:27 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:27 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:27 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:55:27 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:27 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:27 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:28 localhost podman[295293]:
Dec 2 04:55:28 localhost podman[295293]: 2025-12-02 09:55:28.106026123 +0000 UTC m=+0.080736881 container create 5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_sanderson, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, distribution-scope=public, release=1763362218, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:55:28 localhost systemd[1]: Started libpod-conmon-5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a.scope.
Dec 2 04:55:28 localhost systemd[1]: Started libcrun container.
Dec 2 04:55:28 localhost podman[295293]: 2025-12-02 09:55:28.07302111 +0000 UTC m=+0.047731918 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:55:28 localhost podman[295293]: 2025-12-02 09:55:28.174073684 +0000 UTC m=+0.148784452 container init 5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_sanderson, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main)
Dec 2 04:55:28 localhost podman[295293]: 2025-12-02 09:55:28.182641763 +0000 UTC m=+0.157352521 container start 5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_sanderson, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container)
Dec 2 04:55:28 localhost podman[295293]: 2025-12-02 09:55:28.183697392 +0000 UTC m=+0.158408130 container attach 5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_sanderson, name=rhceph, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, RELEASE=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:55:28 localhost eloquent_sanderson[295308]: 167 167
Dec 2 04:55:28 localhost systemd[1]: libpod-5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a.scope: Deactivated successfully.
Dec 2 04:55:28 localhost podman[295293]: 2025-12-02 09:55:28.186221009 +0000 UTC m=+0.160931837 container died 5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_sanderson, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main)
Dec 2 04:55:28 localhost podman[295314]: 2025-12-02 09:55:28.276922256 +0000 UTC m=+0.079684193 container remove 5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_sanderson, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., ceph=True, version=7, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z)
Dec 2 04:55:28 localhost systemd[1]: libpod-conmon-5dfa9f0cd212cda7f4431fa5349aeb70d7991e8c29b63aec1b24d41de6d5785a.scope: Deactivated successfully.
Dec 2 04:55:28 localhost nova_compute[281854]: 2025-12-02 09:55:28.603 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:55:28 localhost ceph-mon[289473]: Reconfiguring mon.np0005541913 (monmap changed)...
Dec 2 04:55:28 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:55:28 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:28 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:28 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:28 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:55:29 localhost ceph-mon[289473]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 2 04:55:29 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 2 04:55:29 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:29 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:29 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:55:30 localhost ceph-mon[289473]: Reconfiguring osd.1 (monmap changed)...
Dec 2 04:55:30 localhost ceph-mon[289473]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:55:30 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:30 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:30 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:30 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:30 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:55:31 localhost ceph-mon[289473]: Reconfiguring osd.4 (monmap changed)...
Dec 2 04:55:31 localhost ceph-mon[289473]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:55:31 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:31 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:31 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:31 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:31 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:31 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:55:32 localhost ceph-mon[289473]: mon.np0005541913@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:55:32 localhost nova_compute[281854]: 2025-12-02 09:55:32.404 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:55:32 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 2 04:55:32 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 2 04:55:32 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:32 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:32 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:32 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:55:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:55:33 localhost podman[295331]: 2025-12-02 09:55:33.468359478 +0000 UTC m=+0.098969248 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:55:33 localhost podman[295331]: 2025-12-02 09:55:33.477058971 +0000 UTC m=+0.107668751 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 2 04:55:33 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:55:33 localhost nova_compute[281854]: 2025-12-02 09:55:33.606 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:55:33 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 2 04:55:33 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 2 04:55:33 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:33 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:33 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:34 localhost openstack_network_exporter[242845]: ERROR 09:55:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:55:34 localhost openstack_network_exporter[242845]: ERROR 09:55:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:55:34 localhost openstack_network_exporter[242845]: ERROR 09:55:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:55:34 localhost openstack_network_exporter[242845]: ERROR 09:55:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:55:34 localhost openstack_network_exporter[242845]:
Dec 2 04:55:34 localhost openstack_network_exporter[242845]: ERROR 09:55:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:55:34 localhost openstack_network_exporter[242845]:
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.396346) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334396411, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2356, "num_deletes": 265, "total_data_size": 8432363, "memory_usage": 9082592, "flush_reason": "Manual Compaction"}
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334426644, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4802871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13710, "largest_seqno": 16061, "table_properties": {"data_size": 4793637, "index_size": 5483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23848, "raw_average_key_size": 22, "raw_value_size": 4773312, "raw_average_value_size": 4456, "num_data_blocks": 229, "num_entries": 1071, "num_filter_entries": 1071, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669289, "oldest_key_time": 1764669289, "file_creation_time": 1764669334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 30348 microseconds, and 10975 cpu microseconds.
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.426693) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4802871 bytes OK
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.426720) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.428616) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.428634) EVENT_LOG_v1 {"time_micros": 1764669334428629, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.428674) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 8420670, prev total WAL file size 8420670, number of live WAL files 2.
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.429687) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303238' seq:72057594037927935, type:22 .. '6B760031323930' seq:0, type:0; will stop at (end)
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(4690KB)], [21(12MB)]
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334429722, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 17571805, "oldest_snapshot_seqno": -1}
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10249 keys, 16745842 bytes, temperature: kUnknown
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334546174, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16745842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16684060, "index_size": 35057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 275342, "raw_average_key_size": 26, "raw_value_size": 16505205, "raw_average_value_size": 1610, "num_data_blocks": 1339, "num_entries": 10249, "num_filter_entries": 10249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669201, "oldest_key_time": 0, "file_creation_time": 1764669334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.546597) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16745842 bytes
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.548835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.7 rd, 143.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.6, 12.2 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 10739, records dropped: 490 output_compression: NoCompression
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.548872) EVENT_LOG_v1 {"time_micros": 1764669334548856, "job": 10, "event": "compaction_finished", "compaction_time_micros": 116589, "compaction_time_cpu_micros": 23488, "output_level": 6, "num_output_files": 1, "total_output_size": 16745842, "num_input_records": 10739, "num_output_records": 10249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334549765, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334551655, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.429578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.551807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.551817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.551823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.551827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:34 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:34.551831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:55:34 localhost ceph-mon[289473]: Reconfiguring mon.np0005541914 (monmap changed)...
Dec 2 04:55:34 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 2 04:55:34 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:34 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:34 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:55:34 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:34 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:34 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:55:35 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x564503730f20 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 2 04:55:35 localhost ceph-mon[289473]: mon.np0005541913@3(peon) e10 my rank is now 2 (was 3)
Dec 2 04:55:35 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Dec 2 04:55:35 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Dec 2 04:55:35 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x56450d29a000 mon_map magic: 0 from mon.2 v2:172.18.0.107:3300/0
Dec 2 04:55:35 localhost ceph-mon[289473]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election
Dec 2 04:55:35 localhost ceph-mon[289473]: paxos.2).electionLogic(42) init, last seen epoch 42
Dec 2 04:55:35 localhost ceph-mon[289473]: mon.np0005541913@2(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:35 localhost ceph-mon[289473]: mon.np0005541913@2(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 04:55:35 localhost systemd[1]: tmp-crun.tRPmeM.mount: Deactivated successfully.
Dec 2 04:55:35 localhost podman[295367]: 2025-12-02 09:55:35.459834528 +0000 UTC m=+0.097631753 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 2 04:55:35 localhost podman[295367]: 2025-12-02 09:55:35.475170329 +0000 UTC m=+0.112967584 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter)
Dec 2 04:55:35 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 04:55:36 localhost podman[240799]: time="2025-12-02T09:55:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:55:36 localhost podman[240799]: @ - - [02/Dec/2025:09:55:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1"
Dec 2 04:55:36 localhost podman[240799]: @ - - [02/Dec/2025:09:55:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18724 "" "Go-http-client/1.1"
Dec 2 04:55:37 localhost ceph-mon[289473]: mon.np0005541913@2(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:37 localhost ceph-mon[289473]: mon.np0005541913@2(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:55:37 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain
Dec 2 04:55:37 localhost ceph-mon[289473]: Remove daemons mon.np0005541910
Dec 2 04:55:37 localhost ceph-mon[289473]: Safe to remove mon.np0005541910: new quorum should be ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'])
Dec 2 04:55:37 localhost ceph-mon[289473]: Removing monitor np0005541910 from monmap...
Dec 2 04:55:37 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon rm", "name": "np0005541910"} : dispatch
Dec 2 04:55:37 localhost ceph-mon[289473]: Removing daemon mon.np0005541910 from np0005541910.localdomain -- ports []
Dec 2 04:55:37 localhost ceph-mon[289473]: mon.np0005541912 calling monitor election
Dec 2 04:55:37 localhost ceph-mon[289473]: mon.np0005541913 calling monitor election
Dec 2 04:55:37 localhost ceph-mon[289473]: mon.np0005541914 calling monitor election
Dec 2 04:55:37 localhost ceph-mon[289473]: mon.np0005541911 calling monitor election
Dec 2 04:55:37 localhost ceph-mon[289473]: mon.np0005541911 is new leader, mons np0005541911,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3)
Dec 2 04:55:37 localhost ceph-mon[289473]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 2 04:55:37 localhost ceph-mon[289473]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 2 04:55:37 localhost ceph-mon[289473]: stray daemon mgr.np0005541909.kfesnk on host np0005541909.localdomain not managed by cephadm
Dec 2 04:55:37 localhost ceph-mon[289473]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 2 04:55:37 localhost ceph-mon[289473]: stray host np0005541909.localdomain has 1 stray daemons: ['mgr.np0005541909.kfesnk']
Dec 2 04:55:37 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:37 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:37 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:55:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 04:55:37 localhost nova_compute[281854]: 2025-12-02 09:55:37.407 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:37 localhost podman[295386]: 2025-12-02 09:55:37.415132501 +0000 UTC m=+0.061565399 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:55:37 localhost podman[295386]: 2025-12-02 09:55:37.420120054 +0000 UTC m=+0.066552902 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:55:37 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 04:55:38 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)... 
Dec 2 04:55:38 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:38 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:38 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain Dec 2 04:55:38 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:38 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:38 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:38 localhost nova_compute[281854]: 2025-12-02 09:55:38.609 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:39 localhost ceph-mon[289473]: Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:55:39 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:55:39 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:39 localhost ceph-mon[289473]: Removed label mon from host np0005541910.localdomain Dec 2 04:55:39 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:39 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:39 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)... 
Dec 2 04:55:39 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:39 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:39 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.855294) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340855344, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 528, "num_deletes": 251, "total_data_size": 502557, "memory_usage": 512496, "flush_reason": "Manual Compaction"} Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340859972, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 315167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16067, "largest_seqno": 16589, "table_properties": 
{"data_size": 312185, "index_size": 901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8645, "raw_average_key_size": 21, "raw_value_size": 305665, "raw_average_value_size": 764, "num_data_blocks": 37, "num_entries": 400, "num_filter_entries": 400, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669334, "oldest_key_time": 1764669334, "file_creation_time": 1764669340, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 4723 microseconds, and 1593 cpu microseconds. Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.860018) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 315167 bytes OK Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.860043) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861320) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861344) EVENT_LOG_v1 {"time_micros": 1764669340861337, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 499246, prev total WAL file size 499246, number of live WAL files 2. Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861978) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. 
'7061786F73003130373934' seq:0, type:0; will stop at (end) Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(307KB)], [24(15MB)] Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340862044, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 17061009, "oldest_snapshot_seqno": -1} Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10122 keys, 14985663 bytes, temperature: kUnknown Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340965808, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 14985663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14926440, "index_size": 32818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 273589, "raw_average_key_size": 27, "raw_value_size": 14751431, "raw_average_value_size": 1457, "num_data_blocks": 1241, "num_entries": 10122, "num_filter_entries": 10122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669201, "oldest_key_time": 0, "file_creation_time": 1764669340, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9d4cd30-d7e1-42a3-a4ff-e4bd7db629d9", "db_session_id": "OW4D0W92HOAH7R2F6LZX", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.966639) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 14985663 bytes Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.969715) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.6 rd, 143.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 16.0 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(101.7) write-amplify(47.5) OK, records in: 10649, records dropped: 527 output_compression: NoCompression Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.969758) EVENT_LOG_v1 {"time_micros": 1764669340969738, "job": 12, "event": "compaction_finished", "compaction_time_micros": 104268, "compaction_time_cpu_micros": 27768, "output_level": 6, "num_output_files": 1, "total_output_size": 14985663, "num_input_records": 10649, "num_output_records": 10122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541913/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340970525, "job": 12, "event": "table_file_deletion", "file_number": 26} Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340973659, "job": 12, "event": "table_file_deletion", "file_number": 24} Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.973932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.973941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.973944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.973947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:40 localhost ceph-mon[289473]: rocksdb: (Original Log Time 2025/12/02-09:55:40.973950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:40 localhost 
ceph-mon[289473]: Reconfiguring crash.np0005541911 (monmap changed)... Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:40 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:40 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:42 localhost ceph-mon[289473]: Removed label mgr from host np0005541910.localdomain Dec 2 04:55:42 localhost ceph-mon[289473]: Reconfiguring crash.np0005541912 (monmap changed)... 
Dec 2 04:55:42 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:55:42 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:42 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:42 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:42 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:55:42 localhost ceph-mon[289473]: mon.np0005541913@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:42 localhost nova_compute[281854]: 2025-12-02 09:55:42.409 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:43 localhost ceph-mon[289473]: Removed label _admin from host np0005541910.localdomain Dec 2 04:55:43 localhost ceph-mon[289473]: Reconfiguring osd.2 (monmap changed)... Dec 2 04:55:43 localhost ceph-mon[289473]: Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:55:43 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:55:43 localhost podman[295410]: 2025-12-02 09:55:43.43817216 +0000 UTC m=+0.079431456 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 04:55:43 localhost podman[295410]: 2025-12-02 09:55:43.4811638 +0000 UTC m=+0.122423106 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 04:55:43 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:55:43 localhost nova_compute[281854]: 2025-12-02 09:55:43.612 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:44 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:44 localhost ceph-mon[289473]: Reconfiguring osd.5 (monmap changed)... Dec 2 04:55:44 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:55:44 localhost ceph-mon[289473]: Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:55:45 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:45 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:45 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:45 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:45 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:55:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:55:45 localhost podman[295430]: 2025-12-02 09:55:45.445878274 +0000 UTC m=+0.085233872 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:55:45 localhost podman[295430]: 2025-12-02 09:55:45.454474184 +0000 UTC m=+0.093829792 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:55:45 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:55:45 localhost podman[295431]: 2025-12-02 09:55:45.500754042 +0000 UTC m=+0.132631240 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:55:45 localhost podman[295431]: 2025-12-02 09:55:45.539073997 +0000 UTC m=+0.170951185 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller) Dec 2 04:55:45 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:55:46 localhost nova_compute[281854]: 2025-12-02 09:55:46.035 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:46 localhost nova_compute[281854]: 2025-12-02 09:55:46.076 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:46 localhost nova_compute[281854]: 2025-12-02 09:55:46.077 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:46 localhost nova_compute[281854]: 2025-12-02 09:55:46.077 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:55:46 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... Dec 2 04:55:46 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:55:46 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:46 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)... 
Dec 2 04:55:46 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:46 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:46 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:55:46 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:46 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:46 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:46 localhost nova_compute[281854]: 2025-12-02 09:55:46.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:46 localhost nova_compute[281854]: 2025-12-02 09:55:46.829 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:55:46 localhost nova_compute[281854]: 2025-12-02 09:55:46.829 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:55:47 localhost ceph-mon[289473]: mon.np0005541913@2(peon).osd e87 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:47 localhost nova_compute[281854]: 2025-12-02 09:55:47.413 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:47 localhost podman[295530]: Dec 2 04:55:47 localhost podman[295530]: 2025-12-02 09:55:47.427515209 +0000 UTC m=+0.044386679 container create 2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_morse, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-type=git, RELEASE=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z) Dec 2 04:55:47 localhost systemd[1]: Started libpod-conmon-2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d.scope. Dec 2 04:55:47 localhost systemd[1]: Started libcrun container. 
Dec 2 04:55:47 localhost podman[295530]: 2025-12-02 09:55:47.474123607 +0000 UTC m=+0.090995107 container init 2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_morse, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, ceph=True, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:55:47 localhost systemd[1]: tmp-crun.UHRNTI.mount: Deactivated successfully. 
Dec 2 04:55:47 localhost podman[295530]: 2025-12-02 09:55:47.485229214 +0000 UTC m=+0.102100694 container start 2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_morse, distribution-scope=public, io.buildah.version=1.41.4, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_CLEAN=True) Dec 2 04:55:47 localhost podman[295530]: 2025-12-02 09:55:47.485787229 +0000 UTC m=+0.102658719 container attach 2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_morse, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1763362218, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:55:47 localhost keen_morse[295545]: 167 167 Dec 2 04:55:47 localhost systemd[1]: libpod-2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d.scope: Deactivated successfully. Dec 2 04:55:47 localhost podman[295530]: 2025-12-02 09:55:47.488693956 +0000 UTC m=+0.105565476 container died 2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_morse, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, 
io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:55:47 localhost podman[295530]: 2025-12-02 09:55:47.408075429 +0000 UTC m=+0.024946949 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:47 localhost podman[295550]: 2025-12-02 09:55:47.577962045 +0000 UTC m=+0.079558710 container remove 2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_morse, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.expose-services=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4) Dec 2 04:55:47 localhost systemd[1]: libpod-conmon-2354d7bcabc3b73860c9c09c50b67f4e3bad2b3d20ee578e0110c1c5c8aafb6d.scope: Deactivated successfully. 
Dec 2 04:55:47 localhost nova_compute[281854]: 2025-12-02 09:55:47.635 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:55:47 localhost nova_compute[281854]: 2025-12-02 09:55:47.637 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:55:47 localhost nova_compute[281854]: 2025-12-02 09:55:47.637 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:55:47 localhost nova_compute[281854]: 2025-12-02 09:55:47.637 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:55:47 localhost ceph-mon[289473]: Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:55:47 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:55:47 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:47 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:47 localhost ceph-mon[289473]: Reconfiguring crash.np0005541913 (monmap changed)... 
Dec 2 04:55:47 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:47 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:47 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:55:47 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:47 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:47 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:55:48 localhost podman[295619]: Dec 2 04:55:48 localhost podman[295619]: 2025-12-02 09:55:48.271464102 +0000 UTC m=+0.072790369 container create cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_turing, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4) Dec 2 04:55:48 localhost systemd[1]: Started libpod-conmon-cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721.scope. Dec 2 04:55:48 localhost systemd[1]: Started libcrun container. Dec 2 04:55:48 localhost podman[295619]: 2025-12-02 09:55:48.333728879 +0000 UTC m=+0.135055146 container init cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_turing, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64) Dec 2 04:55:48 localhost podman[295619]: 2025-12-02 09:55:48.241766048 +0000 UTC 
m=+0.043092365 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:48 localhost podman[295619]: 2025-12-02 09:55:48.345215276 +0000 UTC m=+0.146541553 container start cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_turing, CEPH_POINT_RELEASE=, version=7, name=rhceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True) Dec 2 04:55:48 localhost podman[295619]: 2025-12-02 09:55:48.345555585 +0000 UTC m=+0.146881892 container attach cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_turing, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, release=1763362218, vendor=Red Hat, Inc., vcs-type=git) Dec 2 04:55:48 localhost nifty_turing[295635]: 167 167 Dec 2 04:55:48 localhost systemd[1]: libpod-cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721.scope: Deactivated successfully. 
Dec 2 04:55:48 localhost podman[295619]: 2025-12-02 09:55:48.350806405 +0000 UTC m=+0.152132742 container died cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_turing, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., version=7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:55:48 localhost systemd[1]: var-lib-containers-storage-overlay-598a1355734ee37431373316f6aecb5fb85e77592d26354a91db8aff75419954-merged.mount: Deactivated successfully. Dec 2 04:55:48 localhost systemd[1]: var-lib-containers-storage-overlay-b68deb4f9eb5f1eb957711509d66cf8879be8fad83b7c950899bf065c890600c-merged.mount: Deactivated successfully. 
Dec 2 04:55:48 localhost podman[295640]: 2025-12-02 09:55:48.464964389 +0000 UTC m=+0.100801437 container remove cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_turing, vcs-type=git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, ceph=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z) Dec 2 04:55:48 localhost systemd[1]: libpod-conmon-cb6a64f2e5ddf5cfc625b3f9958202481bb5fae5e3eaae79b3e72ba1d99a6721.scope: Deactivated successfully. 
Dec 2 04:55:48 localhost nova_compute[281854]: 2025-12-02 09:55:48.643 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:48 localhost nova_compute[281854]: 2025-12-02 09:55:48.761 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:55:48 localhost ceph-mon[289473]: Reconfiguring osd.0 (monmap changed)... 
Dec 2 04:55:48 localhost ceph-mon[289473]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:55:48 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:48 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:48 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:55:49 localhost nova_compute[281854]: 2025-12-02 09:55:49.263 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:55:49 localhost nova_compute[281854]: 2025-12-02 09:55:49.264 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:55:49 localhost nova_compute[281854]: 2025-12-02 09:55:49.265 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:49 localhost nova_compute[281854]: 2025-12-02 09:55:49.265 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:49 localhost nova_compute[281854]: 2025-12-02 09:55:49.266 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:49 localhost nova_compute[281854]: 2025-12-02 09:55:49.266 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:49 localhost podman[295717]: Dec 2 04:55:49 localhost podman[295717]: 2025-12-02 09:55:49.386225231 +0000 UTC m=+0.079897398 container create 0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_saha, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main) Dec 2 04:55:49 localhost systemd[1]: Started libpod-conmon-0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17.scope. 
Dec 2 04:55:49 localhost systemd[1]: Started libcrun container. Dec 2 04:55:49 localhost podman[295717]: 2025-12-02 09:55:49.352894379 +0000 UTC m=+0.046566586 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:49 localhost podman[295717]: 2025-12-02 09:55:49.456679888 +0000 UTC m=+0.150352055 container init 0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_saha, io.openshift.expose-services=, release=1763362218, GIT_BRANCH=main, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:55:49 localhost systemd[1]: tmp-crun.dgPeU2.mount: Deactivated successfully. 
Dec 2 04:55:49 localhost podman[295717]: 2025-12-02 09:55:49.469252263 +0000 UTC m=+0.162924430 container start 0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_saha, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, name=rhceph, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main) Dec 2 04:55:49 localhost podman[295717]: 2025-12-02 09:55:49.470155277 +0000 UTC m=+0.163827444 container attach 0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_saha, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, 
com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=) Dec 2 04:55:49 localhost pedantic_saha[295732]: 167 167 Dec 2 04:55:49 localhost systemd[1]: libpod-0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17.scope: Deactivated successfully. Dec 2 04:55:49 localhost podman[295717]: 2025-12-02 09:55:49.472453879 +0000 UTC m=+0.166126076 container died 0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_saha, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4) Dec 2 04:55:49 localhost podman[295737]: 2025-12-02 09:55:49.568780897 +0000 UTC m=+0.087323928 container remove 0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_saha, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, architecture=x86_64) Dec 2 04:55:49 localhost systemd[1]: libpod-conmon-0a5f6dcc5993292ab514244312b8d5e7e1e1bd6e610650d9be8f48dcd1286e17.scope: Deactivated successfully. 
Dec 2 04:55:49 localhost nova_compute[281854]: 2025-12-02 09:55:49.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:49 localhost nova_compute[281854]: 2025-12-02 09:55:49.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:55:49 localhost ceph-mon[289473]: Reconfiguring osd.3 (monmap changed)... Dec 2 04:55:49 localhost ceph-mon[289473]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:55:49 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:49 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:49 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:49 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:50 localhost podman[295812]: Dec 2 04:55:50 localhost podman[295812]: 2025-12-02 09:55:50.352020745 +0000 UTC m=+0.058561528 container create 0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_chaplygin, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=) Dec 2 04:55:50 localhost systemd[1]: Started libpod-conmon-0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c.scope. Dec 2 04:55:50 localhost systemd[1]: Started libcrun container. 
Dec 2 04:55:50 localhost podman[295812]: 2025-12-02 09:55:50.322025552 +0000 UTC m=+0.028566385 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:50 localhost podman[295812]: 2025-12-02 09:55:50.426707314 +0000 UTC m=+0.133248117 container init 0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_chaplygin, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, RELEASE=main, version=7, release=1763362218, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main) Dec 2 04:55:50 localhost brave_chaplygin[295829]: 167 167 Dec 2 04:55:50 localhost podman[295812]: 2025-12-02 09:55:50.436577307 +0000 UTC m=+0.143118070 container start 0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_chaplygin, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1763362218, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True) Dec 2 04:55:50 localhost podman[295812]: 2025-12-02 09:55:50.436913436 +0000 UTC m=+0.143454219 container attach 0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_chaplygin, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, version=7, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:55:50 localhost systemd[1]: var-lib-containers-storage-overlay-0e963a5f3fd7734119a998d76ee4d1922f739a9a8ee7d9bb35b866259cbcf348-merged.mount: Deactivated successfully. Dec 2 04:55:50 localhost podman[295812]: 2025-12-02 09:55:50.439032443 +0000 UTC m=+0.145573206 container died 0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_chaplygin, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.41.4, ceph=True) Dec 2 04:55:50 localhost systemd[1]: libpod-0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c.scope: Deactivated successfully. Dec 2 04:55:50 localhost systemd[1]: tmp-crun.8iFvgG.mount: Deactivated successfully. 
Dec 2 04:55:50 localhost systemd[1]: var-lib-containers-storage-overlay-d2f6381695e455ae5f450c39646c3471924a3560fa9c693a84670ab1552b1462-merged.mount: Deactivated successfully. Dec 2 04:55:50 localhost podman[295834]: 2025-12-02 09:55:50.547540257 +0000 UTC m=+0.099655188 container remove 0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_chaplygin, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, CEPH_POINT_RELEASE=) Dec 2 04:55:50 localhost systemd[1]: libpod-conmon-0730104ba80701935ef0a87e1961e49eed72f6d8b662022cd4c070cfdbb22e8c.scope: Deactivated successfully. 
Dec 2 04:55:50 localhost nova_compute[281854]: 2025-12-02 09:55:50.779 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:55:50 localhost nova_compute[281854]: 2025-12-02 09:55:50.780 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:55:50 localhost nova_compute[281854]: 2025-12-02 09:55:50.780 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:55:50 localhost nova_compute[281854]: 2025-12-02 09:55:50.780 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:55:50 localhost nova_compute[281854]: 2025-12-02 09:55:50.780 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:55:50 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:55:50 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:55:50 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:50 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:50 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:50 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.210 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:55:51 localhost podman[295925]: Dec 2 04:55:51 localhost podman[295925]: 2025-12-02 09:55:51.230457501 +0000 UTC m=+0.049929197 container create 9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_liskov, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, release=1763362218, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:55:51 localhost systemd[1]: Started libpod-conmon-9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93.scope. Dec 2 04:55:51 localhost systemd[1]: Started libcrun container. Dec 2 04:55:51 localhost podman[295925]: 2025-12-02 09:55:51.281772884 +0000 UTC m=+0.101244610 container init 9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_liskov, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64) Dec 2 04:55:51 localhost podman[295925]: 2025-12-02 09:55:51.28909191 +0000 UTC m=+0.108563636 container start 9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_liskov, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., release=1763362218, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:55:51 localhost podman[295925]: 2025-12-02 09:55:51.289400828 +0000 UTC m=+0.108872594 container attach 9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_liskov, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, RELEASE=main, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218) Dec 2 04:55:51 localhost hopeful_liskov[295942]: 167 167 Dec 2 04:55:51 localhost systemd[1]: libpod-9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93.scope: Deactivated successfully. 
Dec 2 04:55:51 localhost podman[295925]: 2025-12-02 09:55:51.292911162 +0000 UTC m=+0.112382888 container died 9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_liskov, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, vcs-type=git, io.openshift.expose-services=, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, RELEASE=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z) Dec 2 04:55:51 localhost podman[295925]: 2025-12-02 09:55:51.207035974 +0000 UTC m=+0.026507760 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:51 localhost podman[295947]: 2025-12-02 09:55:51.370035926 +0000 UTC m=+0.071920005 container remove 9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hopeful_liskov, io.openshift.tags=rhceph ceph, vcs-type=git, io.buildah.version=1.41.4, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, 
com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, architecture=x86_64, version=7, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:55:51 localhost systemd[1]: libpod-conmon-9f6c936d2862ba2d1a3be9da3f5b74a95a35d36db3ca9f374e3cd246fcfeef93.scope: Deactivated successfully. Dec 2 04:55:51 localhost systemd[1]: var-lib-containers-storage-overlay-6ca21786009132d6c533cd4d1bb77f991c80d86d138baa779c46181907fdc0f3-merged.mount: Deactivated successfully. 
Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.603 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.604 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.841 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.842 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11668MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.843 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.843 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:55:51 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:55:51 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:55:51 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:51 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:51 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.943 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.945 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.945 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:55:51 localhost nova_compute[281854]: 2025-12-02 09:55:51.998 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:55:52 localhost ceph-mon[289473]: mon.np0005541913@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:52 localhost ceph-mon[289473]: mon.np0005541913@2(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:55:52 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/9569723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:55:52 localhost podman[296019]: Dec 2 04:55:52 localhost podman[296019]: 2025-12-02 09:55:52.101686674 +0000 UTC m=+0.067834127 container create a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_grothendieck, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, ceph=True, architecture=x86_64) Dec 2 04:55:52 localhost systemd[1]: Started libpod-conmon-a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4.scope. Dec 2 04:55:52 localhost systemd[1]: Started libcrun container. 
Dec 2 04:55:52 localhost podman[296019]: 2025-12-02 09:55:52.17108542 +0000 UTC m=+0.137232903 container init a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_grothendieck, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.41.4, RELEASE=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Dec 2 04:55:52 localhost podman[296019]: 2025-12-02 09:55:52.076633444 +0000 UTC m=+0.042780967 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:55:52 localhost podman[296019]: 2025-12-02 09:55:52.180979616 +0000 UTC m=+0.147127089 container start a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_grothendieck, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, summary=Provides 
the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, RELEASE=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:55:52 localhost podman[296019]: 2025-12-02 09:55:52.181177351 +0000 UTC m=+0.147324824 container attach a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_grothendieck, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., 
io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, RELEASE=main, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z) Dec 2 04:55:52 localhost pedantic_grothendieck[296054]: 167 167 Dec 2 04:55:52 localhost systemd[1]: libpod-a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4.scope: Deactivated successfully. Dec 2 04:55:52 localhost podman[296019]: 2025-12-02 09:55:52.185050064 +0000 UTC m=+0.151197507 container died a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_grothendieck, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, release=1763362218, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-type=git, maintainer=Guillaume Abrioux ) Dec 2 04:55:52 localhost podman[296059]: 2025-12-02 09:55:52.285732248 +0000 UTC m=+0.092056413 container remove a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=pedantic_grothendieck, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux ) Dec 2 04:55:52 localhost systemd[1]: libpod-conmon-a5246e21c4743b64fe0c4444cd005b219a7d35abbfff7de396b9f36931f71ad4.scope: Deactivated successfully. Dec 2 04:55:52 localhost nova_compute[281854]: 2025-12-02 09:55:52.415 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:52 localhost systemd[1]: var-lib-containers-storage-overlay-3ea84b6acdc75ff1523e06148e5bb1bdea47824bbb621ca51e2223aa345cfe82-merged.mount: Deactivated successfully. Dec 2 04:55:52 localhost ceph-mon[289473]: mon.np0005541913@2(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:55:52 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1371164466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:55:52 localhost nova_compute[281854]: 2025-12-02 09:55:52.458 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:55:52 localhost nova_compute[281854]: 2025-12-02 09:55:52.465 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:55:52 localhost nova_compute[281854]: 2025-12-02 09:55:52.522 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:55:52 localhost nova_compute[281854]: 2025-12-02 09:55:52.525 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:55:52 localhost nova_compute[281854]: 2025-12-02 09:55:52.525 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:55:52 localhost ceph-mon[289473]: mon.np0005541913@2(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:55:52 localhost ceph-mon[289473]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/533854766' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:55:52 localhost ceph-mon[289473]: Reconfiguring mon.np0005541913 (monmap changed)... Dec 2 04:55:52 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain Dec 2 04:55:52 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:52 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:52 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:52 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:55:53 localhost nova_compute[281854]: 2025-12-02 09:55:53.646 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:53 localhost ceph-mon[289473]: Reconfiguring crash.np0005541914 (monmap changed)... 
Dec 2 04:55:53 localhost ceph-mon[289473]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:55:53 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:53 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:53 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:53 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:53 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:55:54 localhost ceph-mon[289473]: Added label _no_schedule to host np0005541910.localdomain Dec 2 04:55:54 localhost ceph-mon[289473]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541910.localdomain Dec 2 04:55:54 localhost ceph-mon[289473]: Reconfiguring osd.1 (monmap changed)... Dec 2 04:55:54 localhost ceph-mon[289473]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:55:54 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:54 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:54 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:55:55 localhost ceph-mon[289473]: Reconfiguring osd.4 (monmap changed)... 
Dec 2 04:55:55 localhost ceph-mon[289473]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:55:55 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:55 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:55 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:55 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:55:56 localhost ceph-mon[289473]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:55:56 localhost ceph-mon[289473]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:55:56 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:56 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:56 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:56 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:55:57 localhost ceph-mon[289473]: mon.np0005541913@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:55:57 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:55:57 localhost nova_compute[281854]: 2025-12-02 09:55:57.419 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:57 localhost systemd[1]: tmp-crun.TVHEyU.mount: Deactivated successfully. Dec 2 04:55:57 localhost podman[296078]: 2025-12-02 09:55:57.461790072 +0000 UTC m=+0.096814131 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, 
container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:55:57 localhost podman[296078]: 2025-12-02 09:55:57.476243718 +0000 UTC m=+0.111267797 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true) Dec 2 04:55:57 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:55:57 localhost ceph-mon[289473]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... Dec 2 04:55:57 localhost ceph-mon[289473]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:55:57 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:57 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:57 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:55:57 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:57 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:55:58 localhost nova_compute[281854]: 2025-12-02 09:55:58.651 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:55:58 localhost ceph-mon[289473]: Reconfiguring mon.np0005541914 (monmap changed)... 
Dec 2 04:55:58 localhost ceph-mon[289473]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:56:00 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:00 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} : dispatch Dec 2 04:56:00 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} : dispatch Dec 2 04:56:00 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"}]': finished Dec 2 04:56:00 localhost ceph-mon[289473]: Removed host np0005541910.localdomain Dec 2 04:56:00 localhost ceph-mon[289473]: executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed.#012Traceback (most recent call last):#012 File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work#012 return f(*arg)#012 File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh#012 and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label#012 host = self._get_stored_name(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name#012 self.assert_host(host)#012 File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host#012 raise OrchestratorError('host %s does not exist' % host)#012orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist Dec 2 04:56:00 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : 
dispatch Dec 2 04:56:00 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:56:00 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:56:00 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:56:00 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:56:00 localhost ceph-mon[289473]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:00 localhost ceph-mon[289473]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:00 localhost ceph-mon[289473]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:00 localhost ceph-mon[289473]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:56:01 localhost sshd[296418]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:56:01 localhost systemd[1]: Created slice User Slice of UID 1003. Dec 2 04:56:01 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Dec 2 04:56:01 localhost systemd-logind[757]: New session 67 of user tripleo-admin. Dec 2 04:56:01 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Dec 2 04:56:01 localhost systemd[1]: Starting User Manager for UID 1003... Dec 2 04:56:01 localhost systemd[296422]: Queued start job for default target Main User Target. Dec 2 04:56:01 localhost systemd[296422]: Created slice User Application Slice. Dec 2 04:56:01 localhost systemd[296422]: Started Mark boot as successful after the user session has run 2 minutes. Dec 2 04:56:01 localhost systemd[296422]: Started Daily Cleanup of User's Temporary Directories. Dec 2 04:56:01 localhost systemd[296422]: Reached target Paths. Dec 2 04:56:01 localhost systemd[296422]: Reached target Timers. 
Dec 2 04:56:01 localhost systemd[296422]: Starting D-Bus User Message Bus Socket... Dec 2 04:56:01 localhost systemd[296422]: Starting Create User's Volatile Files and Directories... Dec 2 04:56:01 localhost systemd[296422]: Listening on D-Bus User Message Bus Socket. Dec 2 04:56:01 localhost systemd[296422]: Reached target Sockets. Dec 2 04:56:01 localhost systemd[296422]: Finished Create User's Volatile Files and Directories. Dec 2 04:56:01 localhost systemd[296422]: Reached target Basic System. Dec 2 04:56:01 localhost systemd[296422]: Reached target Main User Target. Dec 2 04:56:01 localhost systemd[296422]: Startup finished in 168ms. Dec 2 04:56:01 localhost systemd[1]: Started User Manager for UID 1003. Dec 2 04:56:01 localhost systemd[1]: Started Session 67 of User tripleo-admin. Dec 2 04:56:01 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:01 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:02 localhost ceph-mon[289473]: mon.np0005541913@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:56:02 localhost python3[296583]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.104/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present 
backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 2 04:56:02 localhost nova_compute[281854]: 2025-12-02 09:56:02.421 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:02 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:02 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:56:03.041 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:56:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:56:03.041 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:56:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:56:03.042 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:56:03 localhost python3[296729]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.104/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:56:03 localhost nova_compute[281854]: 2025-12-02 09:56:03.652 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:56:03 localhost podman[296875]: 2025-12-02 09:56:03.864681135 +0000 UTC m=+0.096609397 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 04:56:03 localhost podman[296875]: 2025-12-02 09:56:03.871890567 +0000 UTC m=+0.103818819 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 04:56:03 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:56:03 localhost python3[296874]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.104 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 04:56:04 localhost openstack_network_exporter[242845]: ERROR 09:56:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:56:04 localhost openstack_network_exporter[242845]: ERROR 09:56:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:56:04 localhost openstack_network_exporter[242845]: ERROR 09:56:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:56:04 localhost openstack_network_exporter[242845]: ERROR 09:56:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:56:04 localhost openstack_network_exporter[242845]: Dec 2 04:56:04 localhost openstack_network_exporter[242845]: ERROR 09:56:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:56:04 localhost openstack_network_exporter[242845]: Dec 2 04:56:06 localhost podman[240799]: time="2025-12-02T09:56:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:56:06 localhost podman[240799]: @ - - [02/Dec/2025:09:56:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 04:56:06 localhost podman[240799]: @ - - [02/Dec/2025:09:56:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18714 "" 
"Go-http-client/1.1" Dec 2 04:56:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:56:06 localhost podman[296910]: 2025-12-02 09:56:06.445242728 +0000 UTC m=+0.081772029 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64) Dec 2 04:56:06 localhost podman[296910]: 2025-12-02 09:56:06.455645597 +0000 UTC m=+0.092174908 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container) Dec 2 04:56:06 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:56:07 localhost ceph-mon[289473]: mon.np0005541913@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:56:07 localhost nova_compute[281854]: 2025-12-02 09:56:07.423 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 04:56:07 localhost podman[296948]: 2025-12-02 09:56:07.634682275 +0000 UTC m=+0.084218204 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:56:07 localhost podman[296948]: 2025-12-02 09:56:07.642790632 +0000 UTC m=+0.092326581 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 04:56:07 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:56:08 localhost ceph-mon[289473]: Saving service mon spec with placement label:mon Dec 2 04:56:08 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:08 localhost ceph-mon[289473]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:56:08 localhost ceph-mon[289473]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:08 localhost nova_compute[281854]: 2025-12-02 09:56:08.654 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:09 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x56450d29a160 mon_map magic: 0 from mon.2 v2:172.18.0.107:3300/0 Dec 2 04:56:09 localhost ceph-mon[289473]: mon.np0005541913@2(peon) e11 removed from monmap, suicide. Dec 2 04:56:09 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x564503730f20 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Dec 2 04:56:09 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x56450d29a000 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Dec 2 04:56:09 localhost podman[296986]: 2025-12-02 09:56:09.959415822 +0000 UTC m=+0.061063685 container died 36af0ed2ef00d05ae4aad6f924c8b496242c4f2361918e4bd57717905928e70b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541913, release=1763362218, io.openshift.expose-services=, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, 
Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4) Dec 2 04:56:09 localhost systemd[1]: var-lib-containers-storage-overlay-393e1e54e92e7ce105bdb9ae967dcd71a5af0f60b460340c2a56d8deb0a84a42-merged.mount: Deactivated successfully. Dec 2 04:56:10 localhost podman[296986]: 2025-12-02 09:56:10.002336932 +0000 UTC m=+0.103984765 container remove 36af0ed2ef00d05ae4aad6f924c8b496242c4f2361918e4bd57717905928e70b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541913, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:56:10 localhost systemd[1]: ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074@mon.np0005541913.service: Deactivated successfully. Dec 2 04:56:10 localhost systemd[1]: Stopped Ceph mon.np0005541913 for c7c8e171-a193-56fb-95fa-8879fcfa7074. Dec 2 04:56:10 localhost systemd[1]: ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074@mon.np0005541913.service: Consumed 7.908s CPU time. Dec 2 04:56:11 localhost systemd[1]: Reloading. Dec 2 04:56:11 localhost systemd-sysv-generator[297146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:56:11 localhost systemd-rc-local-generator[297143]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:12 localhost systemd[1]: tmp-crun.eZLR8U.mount: Deactivated successfully. Dec 2 04:56:12 localhost podman[297266]: 2025-12-02 09:56:12.371795384 +0000 UTC m=+0.096802411 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, 
io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:56:12 localhost nova_compute[281854]: 2025-12-02 09:56:12.426 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:12 localhost podman[297266]: 2025-12-02 09:56:12.473919907 +0000 UTC m=+0.198926874 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, architecture=x86_64, distribution-scope=public, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:56:13 localhost nova_compute[281854]: 2025-12-02 09:56:13.658 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:56:14 localhost podman[297454]: 2025-12-02 09:56:14.007800442 +0000 UTC m=+0.081901183 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 04:56:14 localhost podman[297454]: 2025-12-02 09:56:14.021987971 +0000 UTC m=+0.096088712 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:56:14 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 04:56:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 04:56:15 localhost podman[297794]: 2025-12-02 09:56:15.594514071 +0000 UTC m=+0.083432994 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:56:15 localhost podman[297794]: 2025-12-02 09:56:15.60946186 +0000 UTC m=+0.098380743 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:56:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:56:15 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:56:15 localhost podman[297815]: 2025-12-02 09:56:15.686820641 +0000 UTC m=+0.068974617 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 2 04:56:15 localhost podman[297815]: 2025-12-02 09:56:15.724108128 
+0000 UTC m=+0.106262114 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 04:56:15 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.104 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.105 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.110 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ff6ddbda-d180-4d50-90f7-76610ef11666', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.105835', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '2498a68e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': '75dfc9337d093e5e540339d21b7fe077a6faefad276a00236c23b7f068a02f76'}]}, 'timestamp': '2025-12-02 09:56:16.111980', '_unique_id': '239212fa2ef04bbbbc37b733c15c71aa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.113 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.115 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.115 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.115 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.116 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'df4ca96e-b533-4dd7-b15c-d16fca21659c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.116062', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '24995f98-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': 'dabe46f08dbb3d3977c8c4a82e1d8c2f5b72876b0eb3adb8dbbe7c5b4474156c'}]}, 'timestamp': '2025-12-02 09:56:16.116562', '_unique_id': '8b5bd022952d4e7aaebe79a7e4990d7d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.117 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.118 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.119 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c33eb1e5-2241-4562-b4a6-7870a8fdfd7e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.119071', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '2499da7c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': '7e35bc6683109ff7237d7eff8695b48be71b4c593a732846248e257744ad84ef'}]}, 'timestamp': '2025-12-02 09:56:16.119785', '_unique_id': '1e30485ad9864879a99d8c31cdc1a068'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:56:16.120 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.120 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.122 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.141 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7a68cb9f-745b-41b9-91ef-6ff07384f202', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:56:16.122244', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '249d4112-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.359916731, 'message_signature': '47a987eb7fd220ab3dde9b6cb8b711c2176adfd73f0c23d34e14cffd81edbd81'}]}, 'timestamp': '2025-12-02 09:56:16.142134', '_unique_id': '1c2464c7da5f41939c1536e4a02a259d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.143 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.145 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:56:16.145 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'da4ffd42-5cd3-4f91-a944-b5cfaa5fa0e4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.145672', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '249de7ca-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': 
'1e023951a1d0a8d7a7576ec703e10ea34b27bd0f125fbafea11c0429d4246f60'}]}, 'timestamp': '2025-12-02 09:56:16.146274', '_unique_id': 'b3ceb74d08754ab782c285369c12e1d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR 
oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.147 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.148 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.159 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.159 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '78104aa0-d4e9-4333-aae9-6554e6316e26', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.148794', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '249ff7a4-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.367885334, 'message_signature': '41db1114f02c7cf4e35cc664b72057ec5c91549b70fa942a5f47449d154539a6'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.148794', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24a00dac-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.367885334, 'message_signature': 'd790dc85fc7a3933e875ef976d521faf99375f22d5da727dce99e1ec27e8c04a'}]}, 'timestamp': '2025-12-02 09:56:16.160320', '_unique_id': 'b23344de8e7a436da317c4a28fc283d3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.161 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.162 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.189 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fbfa26ae-36a5-49fc-89ba-73b733bc7a19', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.162872', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '24a48422-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': 'd9799851c83fcb90293cd3f6f8a8af3d51b41248d4f97f70f8a0f24d4c23b815'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.162872', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24a49ec6-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': 'c212d9a5164c936a47710f9e0488262f6dbbfafb0190a896e8b9fccef2127856'}]}, 'timestamp': '2025-12-02 09:56:16.190258', '_unique_id': '066cfc363dbe4e8a80b624afddb78343'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.191 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.193 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.193 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3398ed49-71bd-477a-aaca-6842ea5fe52c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.193519', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '24a5337c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': '7806b2265bad9f9ac63c72e052993f37efacd6d19d4aafaf2952f08f9800656c'}]}, 'timestamp': '2025-12-02 09:56:16.194160', '_unique_id': 'cd78ad54ef964f32a012556a7a055108'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.196 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.196 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 13970000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c109c1fc-0834-4659-bc90-854224f16616', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13970000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:56:16.196698', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': 
None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '24a5ad34-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.359916731, 'message_signature': '117c39cb934da7a128558b2e14be14730e79988804718058e16c2ab094083f18'}]}, 'timestamp': '2025-12-02 09:56:16.197175', '_unique_id': '6c154b499a3045bcbf1582aadce042e8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 
04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 
04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.199 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.199 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '62c2f4e0-cc67-4fd5-b1d0-5dd9bf6f87b1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.199495', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '24a62106-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.367885334, 'message_signature': '175d2048e1df649d7bb0317f8fb3f460fa6080d25ab009880fa0277473f4778b'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.199495', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24a6339e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.367885334, 'message_signature': 'b50f06ef406c5b5ea7a7753ba438f41888edaeb81c5b9f0048ca35c33f77b05c'}]}, 'timestamp': '2025-12-02 09:56:16.200601', '_unique_id': '5102ba27df224e0faa72b296e777d68b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.203 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'c50ff6b1-ea7f-4959-bb44-454510e753c1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.203211', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '24a6ab76-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': '6399161b062cdfb77f97a41b8f07dce26680c3a66a80e0686820535cebf99555'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.203211', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24a6befe-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': '21f297abb24677ac7ec9a97923aea32633a45e414b3a2cf2aa3c16b885fab445'}]}, 'timestamp': '2025-12-02 09:56:16.204176', '_unique_id': 'bb55e25a51b14413a4413f0843071b4b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging
return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.205 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '49dcb828-b3d6-4d8c-83fe-a376c010713a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.206513', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '24a72e66-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': '2bf9ffd6db95a997353190697b61c77e0df931d42b09182be76e9523d1c9f151'}]}, 'timestamp': '2025-12-02 09:56:16.207041', '_unique_id': 'd6a0cbb362614a9e980fe71ccbcc17cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2
04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.208 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 09:56:16.209 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '382d2207-d63b-439d-9097-654daa0ae028', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.209377', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '24a79d6a-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': 'a3957cdc3ced3074cde3054f01212f36ec954b9e80973198329448d2cdac3c8e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.209377', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24a7b354-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': '0abd72a7667db11e7c58a11b91eaad9adb8c7ebba025baa42e105112fe15907c'}]}, 'timestamp': '2025-12-02 09:56:16.210425', '_unique_id': 'd151a8aa90fb48b7a9ee662e122d2db0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.211 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.213 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ede161ac-becf-4d5b-96a4-2b0882bc252f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.213517', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '24a84238-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': 'e3b4d0e8cf08bf26e33fa6db4e62d1a5934533028c72301af89500bb1ba5827b'}]}, 'timestamp': '2025-12-02 09:56:16.214180', '_unique_id': 'f8fc11d8cf8f47539d08bcbbc73de395'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.217 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0b07e3e8-8a8b-4f7a-92b8-62545acb604e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.217345', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '24a8d81a-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': 'c54593617e1c8f88bb8aa6c37c5c15be9c923ecfece4735a6bfac741bd43b0e7'}]}, 'timestamp': '2025-12-02 09:56:16.218034', '_unique_id': '26aa26df08a74d1795022acd3a54c6f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.221 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.221 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bfab355b-d56d-4ab8-a002-93efd4858345', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.221075', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '24a967b2-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': '0ac92561ad86dacc4c38d9b4700dba7c2de69b85b9739c8758ffa32814e342d2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.221075', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24a98166-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': '8f0502602d59c266f7955922037ac2c38f0bb82c9082ed8cb7ccaa97e3ee9d2c'}]}, 'timestamp': '2025-12-02 09:56:16.222330', '_unique_id': '1221f6e53a694d6cbc1c6428c13c2db6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:56:16.223 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.223 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.224 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.224 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.225 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b6c699ed-e0a6-433d-b9ba-8d3c77a0fb51', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.224920', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '24a9f916-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': 'e954b2f34758c10bd0df3ccb6c87e44943a9339d524541127dfe6780dda24adf'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.224920', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24aa06a4-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': '4b2f7803d6272b98c57b917d995bc626839443a0f4709980e86a75e2813899f8'}]}, 'timestamp': '2025-12-02 09:56:16.225646', '_unique_id': 'b55b8b7d790e45f5b05dc2efb603515f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.226 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.227 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.227 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.227 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'aebaa78a-2206-4f5d-9e07-77a187ef1d55', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.227341', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '24aa5780-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.367885334, 'message_signature': '35a3ce4f346656903e3e477451c7b9af2471c34989870df659f8aed7e257dbfd'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.227341', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24aa6a0e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.367885334, 'message_signature': '247f3a40caca0cfbb4945a5e1389718ae970d4bf80b76438d1a536a449d6c337'}]}, 'timestamp': '2025-12-02 09:56:16.228162', '_unique_id': 'dc9dd2bc50f74e7db0ac378fe87a1050'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.229 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.230 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.230 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.230 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.231 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fe89c98c-2b92-4be5-9010-0a87bd849797', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:56:16.230824', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '24aae02e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': 'b49bdbed57486dc37afc724ace815758d25b762e206c5f41878940d5fc2ee2ac'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:56:16.230824', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '24aaee48-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.381964141, 'message_signature': '20dd9fb5dc11fac72a8cb43819ea7e4c775daba2f3116b324f8b3c247b207741'}]}, 'timestamp': '2025-12-02 09:56:16.231555', '_unique_id': '63a5a856eee84ff798bbf693237b613f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.232 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.233 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.233 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a46ca6ec-2ffc-4d14-b6b4-f80f1d33d903', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.233573', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '24ab4d52-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': '9596f2ed0220f5d12958e26c80a547d8c5157ef670dc387f1c5cbb42c5c30caa'}]}, 'timestamp': '2025-12-02 09:56:16.234022', '_unique_id': '96422d91a47a45b49e4f1c43842c5b9a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.234 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.235 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.235 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3cb40f12-92d5-407c-ad4d-0b2f61659463', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:56:16.235763', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '24aba194-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11538.324931424, 'message_signature': 'b74de5da6fefd954ae2ebdee7204bfac1fc7063b37253531e5f044b2e7a905fa'}]}, 'timestamp': '2025-12-02 09:56:16.236156', '_unique_id': '5540f3e4b2ad42cf9bc7af8316f1ff21'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in 
_connection_factory Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:56:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:56:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:56:16.237 12 ERROR oslo_messaging.notify.messaging Dec 2 04:56:17 localhost nova_compute[281854]: 2025-12-02 09:56:17.429 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:18 localhost nova_compute[281854]: 2025-12-02 09:56:18.661 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:22 localhost nova_compute[281854]: 2025-12-02 09:56:22.432 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:23 localhost podman[297926]: Dec 2 04:56:23 localhost podman[297926]: 2025-12-02 09:56:23.056569646 +0000 UTC m=+0.077531606 container create 03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_montalcini, ceph=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume 
Abrioux , io.buildah.version=1.41.4, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64) Dec 2 04:56:23 localhost systemd[1]: Started libpod-conmon-03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1.scope. Dec 2 04:56:23 localhost systemd[1]: Started libcrun container. Dec 2 04:56:23 localhost podman[297926]: 2025-12-02 09:56:23.021374244 +0000 UTC m=+0.042336234 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:23 localhost podman[297926]: 2025-12-02 09:56:23.131982253 +0000 UTC m=+0.152944223 container init 03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_montalcini, io.buildah.version=1.41.4, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, summary=Provides the latest 
Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph) Dec 2 04:56:23 localhost podman[297926]: 2025-12-02 09:56:23.143197933 +0000 UTC m=+0.164159863 container start 03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_montalcini, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.41.4, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:56:23 localhost podman[297926]: 2025-12-02 09:56:23.143374699 +0000 UTC m=+0.164336729 container attach 03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_montalcini, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph) Dec 2 04:56:23 localhost happy_montalcini[297942]: 167 167 Dec 2 04:56:23 localhost systemd[1]: libpod-03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1.scope: Deactivated successfully. 
Dec 2 04:56:23 localhost podman[297926]: 2025-12-02 09:56:23.149288187 +0000 UTC m=+0.170250167 container died 03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_montalcini, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:56:23 localhost podman[297947]: 2025-12-02 09:56:23.368637416 +0000 UTC m=+0.211871030 container remove 03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_montalcini, description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_CLEAN=True, RELEASE=main) Dec 2 04:56:23 localhost systemd[1]: libpod-conmon-03264424b054028998cb34a7a2a526187fac923fd9b68fd02d20b07e0cf1c1f1.scope: Deactivated successfully. Dec 2 04:56:23 localhost nova_compute[281854]: 2025-12-02 09:56:23.664 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:23 localhost podman[298021]: Dec 2 04:56:23 localhost podman[298021]: 2025-12-02 09:56:23.680157742 +0000 UTC m=+0.081203024 container create f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_napier, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , RELEASE=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:56:23 localhost systemd[1]: Started libpod-conmon-f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558.scope. Dec 2 04:56:23 localhost systemd[1]: Started libcrun container. Dec 2 04:56:23 localhost podman[298021]: 2025-12-02 09:56:23.645481184 +0000 UTC m=+0.046526516 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:23 localhost podman[298021]: 2025-12-02 09:56:23.750652288 +0000 UTC m=+0.151697580 container init f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_napier, distribution-scope=public, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64) Dec 2 04:56:23 localhost podman[298021]: 2025-12-02 09:56:23.760122302 +0000 UTC m=+0.161167594 container start f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_napier, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, vcs-type=git, ceph=True) Dec 2 04:56:23 localhost podman[298021]: 2025-12-02 09:56:23.760313697 +0000 UTC m=+0.161358979 container attach f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_napier, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1763362218, ceph=True) Dec 2 04:56:23 localhost pensive_napier[298055]: 167 167 Dec 2 04:56:23 localhost systemd[1]: libpod-f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558.scope: Deactivated successfully. 
Dec 2 04:56:23 localhost podman[298021]: 2025-12-02 09:56:23.765156146 +0000 UTC m=+0.166201498 container died f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_napier, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:56:24 localhost podman[298060]: 2025-12-02 09:56:24.030180998 +0000 UTC m=+0.253192686 container remove f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_napier, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, GIT_CLEAN=True, ceph=True, architecture=x86_64, io.openshift.expose-services=) Dec 2 04:56:24 localhost systemd[1]: libpod-conmon-f4ef60eb22415719a1c970c386f62f3626afe9c9f0046e50f65727dd53fbc558.scope: Deactivated successfully. Dec 2 04:56:24 localhost systemd[1]: tmp-crun.hyP7RN.mount: Deactivated successfully. Dec 2 04:56:24 localhost systemd[1]: var-lib-containers-storage-overlay-2d5ea9eba9b610c177964979d11c3c21eb2827b38812ec17a33d13c56a655531-merged.mount: Deactivated successfully. 
Dec 2 04:56:24 localhost podman[298091]: Dec 2 04:56:24 localhost podman[298091]: 2025-12-02 09:56:24.142722679 +0000 UTC m=+0.078504171 container create d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_hofstadter, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7) Dec 2 04:56:24 localhost systemd[1]: Started libpod-conmon-d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77.scope. Dec 2 04:56:24 localhost systemd[1]: Started libcrun container. 
Dec 2 04:56:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c163f36118e059d7cbe6fb224a42b05d9f767faa6bde3af4732b16364ddc82/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Dec 2 04:56:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c163f36118e059d7cbe6fb224a42b05d9f767faa6bde3af4732b16364ddc82/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Dec 2 04:56:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c163f36118e059d7cbe6fb224a42b05d9f767faa6bde3af4732b16364ddc82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:56:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c163f36118e059d7cbe6fb224a42b05d9f767faa6bde3af4732b16364ddc82/merged/var/lib/ceph/mon/ceph-np0005541913 supports timestamps until 2038 (0x7fffffff) Dec 2 04:56:24 localhost podman[298091]: 2025-12-02 09:56:24.205698025 +0000 UTC m=+0.141479517 container init d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_hofstadter, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:56:24 localhost podman[298091]: 2025-12-02 09:56:24.108383931 +0000 UTC m=+0.044165473 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:24 localhost podman[298091]: 2025-12-02 09:56:24.222273579 +0000 UTC m=+0.158055071 container start d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_hofstadter, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, version=7) Dec 2 04:56:24 localhost podman[298091]: 2025-12-02 09:56:24.222565066 +0000 UTC m=+0.158346598 container attach d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_hofstadter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_CLEAN=True, name=rhceph, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, distribution-scope=public, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 2 04:56:24 localhost systemd[1]: libpod-d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77.scope: Deactivated successfully. 
Dec 2 04:56:24 localhost podman[298091]: 2025-12-02 09:56:24.321402661 +0000 UTC m=+0.257184203 container died d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_hofstadter, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64) Dec 2 04:56:24 localhost podman[298132]: 2025-12-02 09:56:24.475757371 +0000 UTC m=+0.143534681 container remove d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_hofstadter, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=1763362218, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:56:24 localhost systemd[1]: libpod-conmon-d2fa5ff358638910ceab21ed19ba0a0f387a4d8652b339ea1aa3bc6a8029da77.scope: Deactivated successfully. Dec 2 04:56:24 localhost systemd[1]: Reloading. Dec 2 04:56:24 localhost systemd-rc-local-generator[298170]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:56:24 localhost systemd-sysv-generator[298173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:24 localhost systemd[1]: tmp-crun.Ktvj4e.mount: Deactivated successfully. Dec 2 04:56:24 localhost systemd[1]: var-lib-containers-storage-overlay-83c163f36118e059d7cbe6fb224a42b05d9f767faa6bde3af4732b16364ddc82-merged.mount: Deactivated successfully. Dec 2 04:56:24 localhost systemd[1]: Reloading. Dec 2 04:56:25 localhost systemd-rc-local-generator[298213]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 2 04:56:25 localhost systemd-sysv-generator[298216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 2 04:56:25 localhost systemd[1]: Starting Ceph mon.np0005541913 for c7c8e171-a193-56fb-95fa-8879fcfa7074... 
Dec 2 04:56:25 localhost podman[298278]: Dec 2 04:56:25 localhost podman[298278]: 2025-12-02 09:56:25.595791692 +0000 UTC m=+0.083786533 container create f33ff84df5750a140229ed8a5fedf56b5e50bccee283e663eaa312fe015d114c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541913, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, release=1763362218, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7) Dec 2 04:56:25 localhost systemd[1]: tmp-crun.1c4J9E.mount: Deactivated successfully. 
Dec 2 04:56:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43dfa7ef56d13eb080a91a7a700935be6bbfa9e6ab42fe13cf1e9a8a1b0d8d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 2 04:56:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43dfa7ef56d13eb080a91a7a700935be6bbfa9e6ab42fe13cf1e9a8a1b0d8d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 2 04:56:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43dfa7ef56d13eb080a91a7a700935be6bbfa9e6ab42fe13cf1e9a8a1b0d8d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 2 04:56:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43dfa7ef56d13eb080a91a7a700935be6bbfa9e6ab42fe13cf1e9a8a1b0d8d0/merged/var/lib/ceph/mon/ceph-np0005541913 supports timestamps until 2038 (0x7fffffff) Dec 2 04:56:25 localhost podman[298278]: 2025-12-02 09:56:25.556788058 +0000 UTC m=+0.044782929 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:25 localhost podman[298278]: 2025-12-02 09:56:25.661737526 +0000 UTC m=+0.149732367 container init f33ff84df5750a140229ed8a5fedf56b5e50bccee283e663eaa312fe015d114c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541913, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, description=Red Hat 
Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, ceph=True, architecture=x86_64, io.openshift.expose-services=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph) Dec 2 04:56:25 localhost podman[298278]: 2025-12-02 09:56:25.670858791 +0000 UTC m=+0.158853632 container start f33ff84df5750a140229ed8a5fedf56b5e50bccee283e663eaa312fe015d114c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541913, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc.) 
Dec 2 04:56:25 localhost bash[298278]: f33ff84df5750a140229ed8a5fedf56b5e50bccee283e663eaa312fe015d114c Dec 2 04:56:25 localhost systemd[1]: Started Ceph mon.np0005541913 for c7c8e171-a193-56fb-95fa-8879fcfa7074. Dec 2 04:56:25 localhost ceph-mon[298296]: set uid:gid to 167:167 (ceph:ceph) Dec 2 04:56:25 localhost ceph-mon[298296]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Dec 2 04:56:25 localhost ceph-mon[298296]: pidfile_write: ignore empty --pid-file Dec 2 04:56:25 localhost ceph-mon[298296]: load: jerasure load: lrc Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: RocksDB version: 7.9.2 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Git sha 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: DB SUMMARY Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: DB Session ID: 7NRXCK2K9UGWEPQBYWTV Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: CURRENT file: CURRENT Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: IDENTITY file: IDENTITY Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005541913/store.db dir, Total Num: 0, files: Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005541913/store.db: 000004.log size: 761 ; Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.error_if_exists: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.create_if_missing: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.paranoid_checks: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: 
Options.verify_sst_unique_id_in_manifest: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.env: 0x5631822d99e0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.fs: PosixFileSystem Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.info_log: 0x563183c4ad20 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_file_opening_threads: 16 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.statistics: (nil) Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.use_fsync: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_log_file_size: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.log_file_time_to_roll: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.keep_log_file_num: 1000 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.recycle_log_file_num: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.allow_fallocate: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.allow_mmap_reads: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.allow_mmap_writes: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.use_direct_reads: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.create_missing_column_families: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.db_log_dir: Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.wal_dir: Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.table_cache_numshardbits: 6 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 2 04:56:25 localhost 
ceph-mon[298296]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.advise_random_on_open: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.db_write_buffer_size: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.write_buffer_manager: 0x563183c5b540 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.use_adaptive_mutex: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.rate_limiter: (nil) Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.wal_recovery_mode: 2 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.enable_thread_tracking: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.enable_pipelined_write: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.unordered_write: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.row_cache: None Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.wal_filter: None Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.allow_ingest_behind: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.two_write_queues: 0 Dec 2 
04:56:25 localhost ceph-mon[298296]: rocksdb: Options.manual_wal_flush: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.wal_compression: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.atomic_flush: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.persist_stats_to_disk: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.log_readahead_size: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.best_efforts_recovery: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.allow_data_in_errors: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.db_host_id: __hostname__ Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.enforce_single_del_contracts: true Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_background_jobs: 2 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_background_compactions: -1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_subcompactions: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.delayed_write_rate : 16777216 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_total_wal_size: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: 
Options.stats_dump_period_sec: 600 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.stats_persist_period_sec: 600 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_open_files: -1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bytes_per_sync: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_readahead_size: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_background_flushes: -1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Compression algorithms supported: Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: #011kZSTD supported: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: #011kXpressCompression supported: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: #011kBZip2Compression supported: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: #011kLZ4Compression supported: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: #011kZlibCompression supported: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: #011kSnappyCompression supported: 1 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: DMutex implementation: pthread_mutex_t Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005541913/store.db/MANIFEST-000005 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 2 04:56:25 
localhost ceph-mon[298296]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.merge_operator: Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_filter: None Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_filter_factory: None Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.sst_partitioner_factory: None Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.memtable_factory: SkipListFactory Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.table_factory: BlockBasedTable Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563183c4a980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x563183c47350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.write_buffer_size: 33554432 Dec 2 04:56:25 
localhost ceph-mon[298296]: rocksdb: Options.max_write_buffer_number: 2
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression: NoCompression
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression: Disabled
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.prefix_extractor: nullptr
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.num_levels: 7
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.min_write_buffer_number_to_merge: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.window_bits: -14
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.level: 32767
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.strategy: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.enabled: false
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.level0_file_num_compaction_trigger: 4
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.target_file_size_base: 67108864
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.target_file_size_multiplier: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_base: 268435456
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.arena_block_size: 1048576
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.disable_auto_compactions: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.table_properties_collectors:
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.inplace_update_support: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.memtable_huge_page_size: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.bloom_locality: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.max_successive_merges: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.paranoid_file_checks: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.force_consistency_checks: 1
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.report_bg_io_stats: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.ttl: 2592000
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.enable_blob_files: false
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.min_blob_size: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.blob_file_size: 268435456
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.blob_compression_type: NoCompression
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.blob_file_starting_level: 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005541913/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2b5a5119-a77e-4ac2-8a7c-136bbfa56c89
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669385721329, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669385740724, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669385740874, "job": 1, "event": "recovery_finished"}
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563183c6ee00
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: DB pointer 0x563183d64000
Dec 2 04:56:25 localhost ceph-mon[298296]: mon.np0005541913 does not exist in monmap, will attempt to join an existing cluster
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:56:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.019 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.019 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.1 0.02 0.00 1 0.019 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.019 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563183c47350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 2 04:56:25 localhost ceph-mon[298296]: using public_addr v2:172.18.0.104:0/0 -> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Dec 2 04:56:25 localhost ceph-mon[298296]: starting mon.np0005541913 rank -1 at public addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] at bind addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005541913 fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 2 04:56:25 localhost ceph-mon[298296]: mon.np0005541913@-1(???) e0 preinit fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 2 04:56:25 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing) e11 sync_obtain_latest_monmap
Dec 2 04:56:25 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing) e11 sync_obtain_latest_monmap obtained monmap e11
Dec 2 04:56:25 localhost podman[298339]:
Dec 2 04:56:25 localhost podman[298339]: 2025-12-02 09:56:25.934531616 +0000 UTC m=+0.098669141 container create e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_khayyam, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7)
Dec 2 04:56:25 localhost systemd[1]: Started libpod-conmon-e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a.scope.
Dec 2 04:56:25 localhost systemd[1]: Started libcrun container.
Dec 2 04:56:25 localhost podman[298339]: 2025-12-02 09:56:25.891004081 +0000 UTC m=+0.055141646 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:56:26 localhost podman[298339]: 2025-12-02 09:56:26.004862118 +0000 UTC m=+0.168999643 container init e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_khayyam, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, RELEASE=main, release=1763362218, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7)
Dec 2 04:56:26 localhost podman[298339]: 2025-12-02 09:56:26.012879743 +0000 UTC m=+0.177017268 container start e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_khayyam, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1763362218, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, vcs-type=git)
Dec 2 04:56:26 localhost eager_khayyam[298354]: 167 167
Dec 2 04:56:26 localhost podman[298339]: 2025-12-02 09:56:26.013322984 +0000 UTC m=+0.177460499 container attach e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_khayyam, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2025-11-26T19:44:28Z, RELEASE=main)
Dec 2 04:56:26 localhost systemd[1]: libpod-e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a.scope: Deactivated successfully.
Dec 2 04:56:26 localhost podman[298339]: 2025-12-02 09:56:26.017421644 +0000 UTC m=+0.181559179 container died e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_khayyam, RELEASE=main, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, release=1763362218, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=)
Dec 2 04:56:26 localhost podman[298359]: 2025-12-02 09:56:26.086599425 +0000 UTC m=+0.067028325 container remove e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_khayyam, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, ceph=True, io.buildah.version=1.41.4, vendor=Red Hat, Inc., RELEASE=main, release=1763362218, GIT_BRANCH=main, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Dec 2 04:56:26 localhost systemd[1]: libpod-conmon-e97d30f2d7f3c7033dc7df5787d709b325554bb024b71ab06957ece71a4e791a.scope: Deactivated successfully.
Dec 2 04:56:26 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).mds e16 new map
Dec 2 04:56:26 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-02T08:05:53.424954+0000#012modified#0112025-12-02T09:52:13.505190+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01184#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26573}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26573 members: 26573#012[mds.mds.np0005541912.ghcwcm{0:26573} state up:active seq 13 addr [v2:172.18.0.106:6808/955707462,v1:172.18.0.106:6809/955707462] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005541914.sqgqkj{-1:16923} state up:standby seq 1 addr [v2:172.18.0.108:6808/2216063099,v1:172.18.0.108:6809/2216063099] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005541913.maexpe{-1:26386} state up:standby seq 1 addr [v2:172.18.0.107:6808/3746047079,v1:172.18.0.107:6809/3746047079] compat {c=[1],r=[1],i=[17ff]}]
Dec 2 04:56:26 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).osd e87 crush map has features 3314933000852226048, adjusting msgr requires
Dec 2 04:56:26 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Dec 2 04:56:26 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Dec 2 04:56:26 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Deploying daemon mon.np0005541913 on np0005541913.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:26 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring osd.0 (monmap changed)...
Dec 2 04:56:26 localhost ceph-mon[298296]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:56:26 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).paxosservice(auth 1..38) refresh upgraded, format 0 -> 3
Dec 2 04:56:26 localhost podman[298435]:
Dec 2 04:56:26 localhost podman[298435]: 2025-12-02 09:56:26.953175693 +0000 UTC m=+0.042842717 container create f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_villani, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4)
Dec 2 04:56:26 localhost systemd[1]: Started libpod-conmon-f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f.scope.
Dec 2 04:56:26 localhost systemd[1]: Started libcrun container.
Dec 2 04:56:27 localhost podman[298435]: 2025-12-02 09:56:27.008444392 +0000 UTC m=+0.098111416 container init f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_villani, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z)
Dec 2 04:56:27 localhost podman[298435]: 2025-12-02 09:56:27.019885978 +0000 UTC m=+0.109553002 container start f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_villani, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, maintainer=Guillaume Abrioux , RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=)
Dec 2 04:56:27 localhost podman[298435]: 2025-12-02 09:56:27.020052053 +0000 UTC m=+0.109719097 container attach f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_villani, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, release=1763362218, version=7, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9
in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True) Dec 2 04:56:27 localhost ecstatic_villani[298451]: 167 167 Dec 2 04:56:27 localhost systemd[1]: libpod-f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f.scope: Deactivated successfully. Dec 2 04:56:27 localhost podman[298435]: 2025-12-02 09:56:27.02257703 +0000 UTC m=+0.112244134 container died f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_villani, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:56:27 localhost podman[298435]: 2025-12-02 09:56:26.937818553 +0000 UTC m=+0.027485587 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:27 localhost podman[298456]: 2025-12-02 09:56:27.120287205 +0000 UTC m=+0.085418346 container remove f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_villani, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.41.4, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Dec 2 04:56:27 localhost systemd[1]: libpod-conmon-f68db0244436d329a512f1be275ccdbbf5652c4911780816afe0b3335a7fd21f.scope: Deactivated successfully. 
Dec 2 04:56:27 localhost systemd[1]: var-lib-containers-storage-overlay-7da958795fda4d9380b200b691c953f6a9c2a5bcf242ab83072f70fd14ac119b-merged.mount: Deactivated successfully. Dec 2 04:56:27 localhost nova_compute[281854]: 2025-12-02 09:56:27.435 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:56:27 localhost systemd[1]: tmp-crun.FLsrv8.mount: Deactivated successfully. Dec 2 04:56:27 localhost podman[298514]: 2025-12-02 09:56:27.648565751 +0000 UTC m=+0.115399628 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 04:56:27 localhost podman[298514]: 2025-12-02 09:56:27.654073339 +0000 UTC m=+0.120907116 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3) Dec 2 04:56:27 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 04:56:27 localhost podman[298552]: Dec 2 04:56:27 localhost podman[298552]: 2025-12-02 09:56:27.983063342 +0000 UTC m=+0.069596054 container create 93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_antonelli, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.component=rhceph-container) Dec 2 04:56:28 localhost systemd[1]: Started libpod-conmon-93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4.scope. 
Dec 2 04:56:28 localhost systemd[1]: Started libcrun container. Dec 2 04:56:28 localhost podman[298552]: 2025-12-02 09:56:28.056931068 +0000 UTC m=+0.143463810 container init 93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_antonelli, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.buildah.version=1.41.4, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, ceph=True) Dec 2 04:56:28 localhost podman[298552]: 2025-12-02 09:56:27.960368445 +0000 UTC m=+0.046901227 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:28 localhost podman[298552]: 2025-12-02 09:56:28.067009308 +0000 UTC m=+0.153542020 container start 93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_antonelli, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, 
vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:56:28 localhost podman[298552]: 2025-12-02 09:56:28.067137461 +0000 UTC m=+0.153670173 container attach 93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_antonelli, GIT_CLEAN=True, io.buildah.version=1.41.4, release=1763362218, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, architecture=x86_64, 
CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:56:28 localhost sharp_antonelli[298567]: 167 167 Dec 2 04:56:28 localhost systemd[1]: libpod-93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4.scope: Deactivated successfully. Dec 2 04:56:28 localhost podman[298552]: 2025-12-02 09:56:28.070624375 +0000 UTC m=+0.157157087 container died 93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_antonelli, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, architecture=x86_64, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, release=1763362218, distribution-scope=public) Dec 2 04:56:28 localhost podman[298572]: 2025-12-02 09:56:28.15115832 +0000 UTC m=+0.067301722 container remove 93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_antonelli, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218) Dec 2 04:56:28 localhost systemd[1]: libpod-conmon-93ecfd56657b85ea090788788980db4eb2d3ec6235b666da4cde40c880b028a4.scope: Deactivated successfully. Dec 2 04:56:28 localhost systemd[1]: var-lib-containers-storage-overlay-6818c6a7917354a45eac228f6c57997c8061997e84d16a26b19709d85f55258c-merged.mount: Deactivated successfully. 
Dec 2 04:56:28 localhost nova_compute[281854]: 2025-12-02 09:56:28.667 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:30 localhost podman[298642]: Dec 2 04:56:30 localhost podman[298642]: 2025-12-02 09:56:30.21813781 +0000 UTC m=+0.077803473 container create b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_shannon, ceph=True, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Dec 2 04:56:30 localhost systemd[1]: Started libpod-conmon-b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800.scope. Dec 2 04:56:30 localhost systemd[1]: Started libcrun container. 
Dec 2 04:56:30 localhost podman[298642]: 2025-12-02 09:56:30.186820973 +0000 UTC m=+0.046486696 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:56:30 localhost podman[298642]: 2025-12-02 09:56:30.293764814 +0000 UTC m=+0.153430487 container init b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_shannon, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, architecture=x86_64, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:56:30 localhost systemd[1]: tmp-crun.jOp00s.mount: Deactivated successfully. 
Dec 2 04:56:30 localhost podman[298642]: 2025-12-02 09:56:30.309347352 +0000 UTC m=+0.169013025 container start b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_shannon, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , RELEASE=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:56:30 localhost podman[298642]: 2025-12-02 09:56:30.309595158 +0000 UTC m=+0.169260881 container attach b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_shannon, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph 
Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:56:30 localhost modest_shannon[298658]: 167 167 Dec 2 04:56:30 localhost systemd[1]: libpod-b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800.scope: Deactivated successfully. Dec 2 04:56:30 localhost podman[298642]: 2025-12-02 09:56:30.31227742 +0000 UTC m=+0.171943133 container died b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_shannon, build-date=2025-11-26T19:44:28Z, architecture=x86_64, maintainer=Guillaume Abrioux , release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a 
fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 2 04:56:30 localhost podman[298663]: 2025-12-02 09:56:30.41618809 +0000 UTC m=+0.089613849 container remove b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_shannon, distribution-scope=public, RELEASE=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph) Dec 2 04:56:30 localhost systemd[1]: libpod-conmon-b734fdc9962a63fba32dc35c07c738b65f06c65910d06ad39b3e6ece59602800.scope: Deactivated successfully. Dec 2 04:56:31 localhost systemd[1]: var-lib-containers-storage-overlay-7e82fd8df3763b742fe704874094d01c53cf40aac8ba49e63e74183dfb563878-merged.mount: Deactivated successfully. 
Dec 2 04:56:32 localhost nova_compute[281854]: 2025-12-02 09:56:32.438 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:56:32 localhost ceph-mon[298296]: Reconfiguring osd.3 (monmap changed)... Dec 2 04:56:32 localhost ceph-mon[298296]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:56:32 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:56:32 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:56:32 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:56:32 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:56:32 localhost ceph-mon[298296]: Reconfiguring crash.np0005541914 (monmap changed)... 
Dec 2 04:56:32 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:56:33 localhost nova_compute[281854]: 2025-12-02 09:56:33.669 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:34 localhost openstack_network_exporter[242845]: ERROR 09:56:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:56:34 localhost openstack_network_exporter[242845]: ERROR 09:56:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:56:34 localhost openstack_network_exporter[242845]: ERROR 09:56:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:56:34 localhost openstack_network_exporter[242845]: ERROR 09:56:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:56:34 localhost openstack_network_exporter[242845]: Dec 2 04:56:34 localhost openstack_network_exporter[242845]: ERROR 09:56:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:56:34 localhost openstack_network_exporter[242845]: Dec 2 04:56:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:56:34 localhost podman[298680]: 2025-12-02 09:56:34.451524401 +0000 UTC m=+0.078715317 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 2 04:56:34 localhost podman[298680]: 2025-12-02 09:56:34.456298179 +0000 UTC m=+0.083489035 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:56:34 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:56:36 localhost podman[298806]: 2025-12-02 09:56:36.023678189 +0000 UTC m=+0.069088159 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux )
Dec 2 04:56:36 localhost podman[240799]: time="2025-12-02T09:56:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:56:36 localhost podman[240799]: @ - - [02/Dec/2025:09:56:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1"
Dec 2 04:56:36 localhost podman[298806]: 2025-12-02 09:56:36.14441197 +0000 UTC m=+0.189821950 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True)
Dec 2 04:56:36 localhost podman[240799]: @ - - [02/Dec/2025:09:56:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18721 "" "Go-http-client/1.1"
Dec 2 04:56:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: Reconfiguring osd.1 (monmap changed)...
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:56:36 localhost ceph-mon[298296]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:56:36 localhost ceph-mon[298296]: Reconfiguring osd.4 (monmap changed)...
Dec 2 04:56:36 localhost ceph-mon[298296]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:56:36 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 2 04:56:36 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:56:36 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 2 04:56:36 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:36 localhost podman[298912]: 2025-12-02 09:56:36.616943365 +0000 UTC m=+0.095541798 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Dec 2 04:56:36 localhost podman[298912]: 2025-12-02 09:56:36.653709278 +0000 UTC m=+0.132307721 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 2 04:56:36 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 04:56:37 localhost nova_compute[281854]: 2025-12-02 09:56:37.441 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:56:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 04:56:38 localhost systemd[1]: tmp-crun.gARKAM.mount: Deactivated successfully.
Dec 2 04:56:38 localhost podman[298964]: 2025-12-02 09:56:38.491109644 +0000 UTC m=+0.095900317 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:56:38 localhost podman[298964]: 2025-12-02 09:56:38.501960594 +0000 UTC m=+0.106751267 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:56:38 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 04:56:38 localhost nova_compute[281854]: 2025-12-02 09:56:38.672 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: Reconfig service osd.default_drive_group
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:40 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:56:40 localhost ceph-mon[298296]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:56:41 localhost podman[299060]:
Dec 2 04:56:41 localhost podman[299060]: 2025-12-02 09:56:41.671792856 +0000 UTC m=+0.085111219 container create b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_buck, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=1763362218, io.buildah.version=1.41.4)
Dec 2 04:56:41 localhost systemd[1]: Started libpod-conmon-b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1.scope.
Dec 2 04:56:41 localhost podman[299060]: 2025-12-02 09:56:41.639257625 +0000 UTC m=+0.052576028 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:56:41 localhost systemd[1]: Started libcrun container.
Dec 2 04:56:41 localhost podman[299060]: 2025-12-02 09:56:41.755989619 +0000 UTC m=+0.169307962 container init b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_buck, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=)
Dec 2 04:56:41 localhost podman[299060]: 2025-12-02 09:56:41.765247557 +0000 UTC m=+0.178565890 container start b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_buck, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.buildah.version=1.41.4, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, version=7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:56:41 localhost podman[299060]: 2025-12-02 09:56:41.76539565 +0000 UTC m=+0.178714033 container attach b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_buck, GIT_CLEAN=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:56:41 localhost inspiring_buck[299075]: 167 167
Dec 2 04:56:41 localhost systemd[1]: libpod-b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1.scope: Deactivated successfully.
Dec 2 04:56:41 localhost podman[299060]: 2025-12-02 09:56:41.772007227 +0000 UTC m=+0.185325590 container died b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_buck, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 2 04:56:41 localhost podman[299080]: 2025-12-02 09:56:41.863379452 +0000 UTC m=+0.082657123 container remove b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=inspiring_buck, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, RELEASE=main, version=7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True)
Dec 2 04:56:41 localhost systemd[1]: libpod-conmon-b27019543f618016732fbac028f11b0589f39145b32832cf018fbf444f8230a1.scope: Deactivated successfully.
Dec 2 04:56:42 localhost nova_compute[281854]: 2025-12-02 09:56:42.443 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:56:42 localhost ceph-mon[298296]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:42 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:56:42 localhost ceph-mon[298296]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:56:42 localhost systemd[1]: var-lib-containers-storage-overlay-5a6d3d54b0017b5f30a92e0e47dec362fde2b2f5976153eadfc00715b22193bc-merged.mount: Deactivated successfully.
Dec 2 04:56:42 localhost podman[299158]:
Dec 2 04:56:42 localhost podman[299158]: 2025-12-02 09:56:42.773394593 +0000 UTC m=+0.084013149 container create 6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_poincare, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-type=git, RELEASE=main, name=rhceph, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7)
Dec 2 04:56:42 localhost systemd[1]: Started libpod-conmon-6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449.scope.
Dec 2 04:56:42 localhost systemd[1]: Started libcrun container.
Dec 2 04:56:42 localhost podman[299158]: 2025-12-02 09:56:42.838587668 +0000 UTC m=+0.149206244 container init 6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_poincare, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, release=1763362218, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux )
Dec 2 04:56:42 localhost podman[299158]: 2025-12-02 09:56:42.742089406 +0000 UTC m=+0.052708012 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:56:42 localhost podman[299158]: 2025-12-02 09:56:42.858795988 +0000 UTC m=+0.169414574 container start 6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_poincare, maintainer=Guillaume Abrioux , distribution-scope=public,
com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, architecture=x86_64, version=7, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph) Dec 2 04:56:42 localhost awesome_poincare[299173]: 167 167 Dec 2 04:56:42 localhost podman[299158]: 2025-12-02 09:56:42.859073575 +0000 UTC m=+0.169692151 container attach 6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_poincare, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 
7, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, name=rhceph) Dec 2 04:56:42 localhost systemd[1]: libpod-6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449.scope: Deactivated successfully. Dec 2 04:56:42 localhost podman[299158]: 2025-12-02 09:56:42.86186189 +0000 UTC m=+0.172480496 container died 6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_poincare, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z) Dec 2 04:56:42 localhost podman[299178]: 2025-12-02 09:56:42.957920141 +0000 UTC m=+0.083782113 
container remove 6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_poincare, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph) Dec 2 04:56:42 localhost systemd[1]: libpod-conmon-6677086a3cc75f63e6dfa3973eba378344904ca36077d8c68d609f0bcc6d0449.scope: Deactivated successfully. Dec 2 04:56:43 localhost systemd[1]: session-66.scope: Deactivated successfully. Dec 2 04:56:43 localhost systemd[1]: session-66.scope: Consumed 27.362s CPU time. Dec 2 04:56:43 localhost systemd-logind[757]: Session 66 logged out. Waiting for processes to exit. Dec 2 04:56:43 localhost systemd-logind[757]: Removed session 66. Dec 2 04:56:43 localhost systemd[1]: tmp-crun.AgdNw7.mount: Deactivated successfully. Dec 2 04:56:43 localhost systemd[1]: var-lib-containers-storage-overlay-07f90f91593a877f09f0a53c131e2e823965549b94f5ff30b693931b57a2d15c-merged.mount: Deactivated successfully. 
Dec 2 04:56:43 localhost nova_compute[281854]: 2025-12-02 09:56:43.706 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:56:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 04:56:44 localhost podman[299201]: 2025-12-02 09:56:44.461109994 +0000 UTC m=+0.095837465 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 2 04:56:44 localhost podman[299201]: 2025-12-02 09:56:44.47139148 +0000 UTC m=+0.106118941 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:56:44 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 04:56:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 04:56:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:56:46 localhost podman[299221]: 2025-12-02 09:56:46.434755426 +0000 UTC m=+0.076538450 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:56:46 localhost podman[299222]: 2025-12-02 09:56:46.488467803 +0000 UTC m=+0.126887666 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 2 04:56:46 localhost podman[299221]: 2025-12-02 09:56:46.523318835 +0000 UTC m=+0.165101849 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:56:46 localhost nova_compute[281854]: 2025-12-02 09:56:46.526 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:46 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 04:56:46 localhost podman[299222]: 2025-12-02 09:56:46.576987822 +0000 UTC m=+0.215407735 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Dec 2 04:56:46 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:56:46 localhost nova_compute[281854]: 2025-12-02 09:56:46.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:46 localhost nova_compute[281854]: 2025-12-02 09:56:46.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 2 04:56:46 localhost nova_compute[281854]: 2025-12-02 09:56:46.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 2 04:56:47 localhost nova_compute[281854]: 2025-12-02 09:56:47.446 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:56:47 localhost nova_compute[281854]: 2025-12-02 09:56:47.657 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 2 04:56:47 localhost nova_compute[281854]: 2025-12-02 09:56:47.658 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 2 04:56:47 localhost nova_compute[281854]: 2025-12-02 09:56:47.658 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 2 04:56:47 localhost nova_compute[281854]: 2025-12-02 09:56:47.658 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 2 04:56:48 localhost nova_compute[281854]: 2025-12-02 09:56:48.710 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:56:48 localhost nova_compute[281854]: 2025-12-02 09:56:48.949 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.200 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.201 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.202 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.202 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.203 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:49 localhost nova_compute[281854]: 2025-12-02 09:56:49.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.113 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.114 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.114 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.115 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.115 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 04:56:50 localhost ceph-mon[298296]: mon.np0005541913@-1(probing) e11 handle_auth_request failed to assign global_id
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.581 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 04:56:50 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).osd e87 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 2 04:56:50 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).osd e87 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 2 04:56:50 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).osd e88 e88: 6 total, 6 up, 6 in
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 2 04:56:50 localhost ceph-mon[298296]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk'
Dec 2 04:56:50 localhost ceph-mon[298296]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:56:50 localhost ceph-mon[298296]: from='client.? 172.18.0.200:0/219576174' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 2 04:56:50 localhost ceph-mon[298296]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 2 04:56:50 localhost ceph-mon[298296]: Activating manager daemon np0005541910.kzipdo
Dec 2 04:56:50 localhost ceph-mon[298296]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.687 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.688 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.919 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.921 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11707MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id":
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.922 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:56:50 localhost nova_compute[281854]: 2025-12-02 09:56:50.922 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:56:51 localhost nova_compute[281854]: 2025-12-02 09:56:51.265 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:56:51 localhost nova_compute[281854]: 2025-12-02 09:56:51.266 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:56:51 localhost nova_compute[281854]: 2025-12-02 09:56:51.266 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:56:51 localhost nova_compute[281854]: 2025-12-02 09:56:51.307 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:56:51 localhost ceph-mon[298296]: mon.np0005541913@-1(probing) e11 handle_auth_request failed to assign global_id Dec 2 04:56:51 localhost nova_compute[281854]: 2025-12-02 09:56:51.748 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:56:51 localhost nova_compute[281854]: 2025-12-02 09:56:51.754 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:56:52 localhost nova_compute[281854]: 2025-12-02 09:56:52.490 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:56:52 localhost nova_compute[281854]: 2025-12-02 09:56:52.624 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:56:52 localhost nova_compute[281854]: 2025-12-02 09:56:52.627 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:56:52 localhost nova_compute[281854]: 2025-12-02 09:56:52.628 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:56:53 localhost systemd[1]: Stopping User Manager for UID 1002... Dec 2 04:56:53 localhost systemd[25916]: Activating special unit Exit the Session... Dec 2 04:56:53 localhost systemd[25916]: Removed slice User Background Tasks Slice. 
Dec 2 04:56:53 localhost systemd[25916]: Stopped target Main User Target.
Dec 2 04:56:53 localhost systemd[25916]: Stopped target Basic System.
Dec 2 04:56:53 localhost systemd[25916]: Stopped target Paths.
Dec 2 04:56:53 localhost systemd[25916]: Stopped target Sockets.
Dec 2 04:56:53 localhost systemd[25916]: Stopped target Timers.
Dec 2 04:56:53 localhost systemd[25916]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 2 04:56:53 localhost systemd[25916]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 2 04:56:53 localhost systemd[25916]: Closed D-Bus User Message Bus Socket.
Dec 2 04:56:53 localhost systemd[25916]: Stopped Create User's Volatile Files and Directories.
Dec 2 04:56:53 localhost systemd[25916]: Removed slice User Application Slice.
Dec 2 04:56:53 localhost systemd[25916]: Reached target Shutdown.
Dec 2 04:56:53 localhost systemd[25916]: Finished Exit the Session.
Dec 2 04:56:53 localhost systemd[25916]: Reached target Exit the Session.
Dec 2 04:56:53 localhost systemd[1]: user@1002.service: Deactivated successfully.
Dec 2 04:56:53 localhost systemd[1]: Stopped User Manager for UID 1002.
Dec 2 04:56:53 localhost systemd[1]: user@1002.service: Consumed 14.194s CPU time, read 0B from disk, written 7.0K to disk.
Dec 2 04:56:53 localhost systemd[1]: Stopping User Runtime Directory /run/user/1002...
Dec 2 04:56:53 localhost systemd[1]: run-user-1002.mount: Deactivated successfully.
Dec 2 04:56:53 localhost systemd[1]: user-runtime-dir@1002.service: Deactivated successfully.
Dec 2 04:56:53 localhost systemd[1]: Stopped User Runtime Directory /run/user/1002.
Dec 2 04:56:53 localhost systemd[1]: Removed slice User Slice of UID 1002.
Dec 2 04:56:53 localhost systemd[1]: user-1002.slice: Consumed 4min 30.785s CPU time.
Dec 2 04:56:53 localhost nova_compute[281854]: 2025-12-02 09:56:53.748 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:56:53 localhost ceph-mon[298296]: mon.np0005541913@-1(probing) e11 handle_auth_request failed to assign global_id
Dec 2 04:56:57 localhost nova_compute[281854]: 2025-12-02 09:56:57.493 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:56:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:56:58 localhost podman[299317]: 2025-12-02 09:56:58.450745498 +0000 UTC m=+0.086425413 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 2 04:56:58 localhost podman[299317]: 2025-12-02 09:56:58.46613041 +0000 UTC m=+0.101810325 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 04:56:58 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 04:56:58 localhost nova_compute[281854]: 2025-12-02 09:56:58.751 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:02 localhost nova_compute[281854]: 2025-12-02 09:57:02.520 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:57:03.042 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 04:57:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:57:03.042 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 04:57:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:57:03.047 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 04:57:03 localhost nova_compute[281854]: 2025-12-02 09:57:03.781 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:04 localhost openstack_network_exporter[242845]: ERROR 09:57:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:57:04 localhost openstack_network_exporter[242845]: ERROR 09:57:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:57:04 localhost openstack_network_exporter[242845]: ERROR 09:57:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:57:04 localhost openstack_network_exporter[242845]: ERROR 09:57:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:57:04 localhost openstack_network_exporter[242845]:
Dec 2 04:57:04 localhost openstack_network_exporter[242845]: ERROR 09:57:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:57:04 localhost openstack_network_exporter[242845]:
Dec 2 04:57:04 localhost ceph-mon[298296]: mon.np0005541913@-1(probing) e11 handle_auth_request failed to assign global_id
Dec 2 04:57:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:57:05 localhost podman[299337]: 2025-12-02 09:57:05.445020077 +0000 UTC m=+0.086922817 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Dec 2 04:57:05 localhost podman[299337]: 2025-12-02 09:57:05.478926684 +0000 UTC m=+0.120829354 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 2 04:57:05 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 04:57:05 localhost systemd[1]: session-67.scope: Deactivated successfully.
Dec 2 04:57:05 localhost systemd[1]: session-67.scope: Consumed 1.682s CPU time.
Dec 2 04:57:05 localhost systemd-logind[757]: Session 67 logged out. Waiting for processes to exit.
Dec 2 04:57:05 localhost systemd-logind[757]: Removed session 67.
Dec 2 04:57:06 localhost podman[240799]: time="2025-12-02T09:57:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:57:06 localhost podman[240799]: @ - - [02/Dec/2025:09:57:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1"
Dec 2 04:57:06 localhost podman[240799]: @ - - [02/Dec/2025:09:57:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18712 "" "Go-http-client/1.1"
Dec 2 04:57:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 04:57:07 localhost podman[299355]: 2025-12-02 09:57:07.44824962 +0000 UTC m=+0.082919450 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 2 04:57:07 localhost podman[299355]: 2025-12-02 09:57:07.490125041 +0000 UTC m=+0.124794841 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Dec 2 04:57:07 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 04:57:07 localhost nova_compute[281854]: 2025-12-02 09:57:07.523 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:08 localhost nova_compute[281854]: 2025-12-02 09:57:08.784 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 04:57:09 localhost podman[299373]: 2025-12-02 09:57:09.149407821 +0000 UTC m=+0.078378288 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 04:57:09 localhost podman[299373]: 2025-12-02 09:57:09.186103223 +0000 UTC m=+0.115073700 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 04:57:09 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 04:57:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:57:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 4864 writes, 22K keys, 4864 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4864 writes, 600 syncs, 8.11 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 88 writes, 289 keys, 88 commit groups, 1.0 writes per commit group, ingest: 0.31 MB, 0.00 MB/s#012Interval WAL: 88 writes, 31 syncs, 2.84 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.703411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669430703543, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11592, "num_deletes": 273, "total_data_size": 19469679, "memory_usage": 20179728, "flush_reason": "Manual Compaction"}
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669430827800, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 16103374, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11597, "table_properties": {"data_size": 16038466, "index_size": 36066, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 295547, "raw_average_key_size": 26, "raw_value_size": 15847482, "raw_average_value_size": 1428, "num_data_blocks": 1382, "num_entries": 11095, "num_filter_entries": 11095, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 1764669385, "file_creation_time": 1764669430, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 124458 microseconds, and 34571 cpu microseconds.
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.827878) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 16103374 bytes OK
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.827910) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.829225) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.829261) EVENT_LOG_v1 {"time_micros": 1764669430829250, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.829285) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19388766, prev total WAL file size 19388766, number of live WAL files 2.
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.832922) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(15MB) 8(1887B)]
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669430833029, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 16105261, "oldest_snapshot_seqno": -1}
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 10846 keys, 16100143 bytes, temperature: kUnknown
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669430917423, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 16100143, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16035892, "index_size": 36054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 290805, "raw_average_key_size": 26, "raw_value_size": 15848133, "raw_average_value_size": 1461, "num_data_blocks": 1381, "num_entries": 10846, "num_filter_entries": 10846, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764669430, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.917742) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 16100143 bytes
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.919287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.7 rd, 190.6 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(15.4, 0.0 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 11100, records dropped: 254 output_compression: NoCompression
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.919311) EVENT_LOG_v1 {"time_micros": 1764669430919301, "job": 4, "event": "compaction_finished", "compaction_time_micros": 84474, "compaction_time_cpu_micros": 22649, "output_level": 6, "num_output_files": 1, "total_output_size": 16100143, "num_input_records": 11100, "num_output_records": 10846, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669430921107, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669430921151, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 2 04:57:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:57:10.832796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:57:12 localhost nova_compute[281854]: 2025-12-02 09:57:12.526 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:13 localhost nova_compute[281854]: 2025-12-02 09:57:13.786 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr handle_mgr_map Activating!
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr handle_mgr_map I am now activating
Dec 2 04:57:14 localhost ceph-mgr[288059]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr load Constructed class from module: balancer
Dec 2 04:57:14 localhost ceph-mgr[288059]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:14 localhost ceph-mgr[288059]: [balancer INFO root] Starting
Dec 2 04:57:14 localhost ceph-mgr[288059]: [balancer INFO root] Optimize plan auto_2025-12-02_09:57:14
Dec 2 04:57:14 localhost ceph-mgr[288059]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 2 04:57:14 localhost ceph-mgr[288059]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 2 04:57:14 localhost ceph-mgr[288059]: [cephadm WARNING root] removing stray HostCache host record np0005541910.localdomain.devices.0
Dec 2 04:57:14 localhost ceph-mgr[288059]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005541910.localdomain.devices.0
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr load Constructed class from module: cephadm
Dec 2 04:57:14 localhost ceph-mgr[288059]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr load Constructed class from module: crash
Dec 2 04:57:14 localhost ceph-mgr[288059]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr load Constructed class from module: devicehealth
Dec 2 04:57:14 localhost ceph-mgr[288059]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr load Constructed class from module: iostat
Dec 2 04:57:14 localhost ceph-mgr[288059]: [devicehealth INFO root] Starting
Dec 2 04:57:14 localhost ceph-mgr[288059]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr load Constructed class from module: nfs
Dec 2 04:57:14 localhost ceph-mgr[288059]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:14 localhost ceph-mgr[288059]: mgr load Constructed class from module: orchestrator
Dec 2 04:57:15 localhost ceph-mgr[288059]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:15 localhost ceph-mgr[288059]: mgr load Constructed class from module: pg_autoscaler
Dec 2 04:57:15 localhost ceph-mgr[288059]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:15 localhost ceph-mgr[288059]: mgr load Constructed class from module: progress
Dec 2 04:57:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] _maybe_adjust
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:15 localhost ceph-mgr[288059]: [progress INFO root] Loading...
Dec 2 04:57:15 localhost ceph-mgr[288059]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events
Dec 2 04:57:15 localhost ceph-mgr[288059]: [progress INFO root] Loaded OSDMap, ready.
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] recovery thread starting
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] starting setup
Dec 2 04:57:15 localhost ceph-mgr[288059]: mgr load Constructed class from module: rbd_support
Dec 2 04:57:15 localhost ceph-mgr[288059]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:15 localhost ceph-mgr[288059]: mgr load Constructed class from module: restful
Dec 2 04:57:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 2 04:57:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.2 total, 600.0 interval#012Cumulative writes: 5937 writes, 25K keys, 5937 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5937 writes, 864 syncs, 6.87 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 215 writes, 593 keys, 215 commit groups, 1.0 writes per commit group, ingest: 0.70 MB, 0.00 MB/s#012Interval WAL: 215 writes, 84 syncs, 2.56 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 2 04:57:15 localhost ceph-mgr[288059]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:15 localhost ceph-mgr[288059]: mgr load Constructed class from module: status
Dec 2 04:57:15 localhost ceph-mgr[288059]: [restful INFO root] server_addr: :: server_port: 8003
Dec 2 04:57:15 localhost ceph-mgr[288059]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:15 localhost ceph-mgr[288059]: mgr load Constructed class from module: telemetry
Dec 2 04:57:15 localhost ceph-mgr[288059]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 2 04:57:15 localhost ceph-mgr[288059]: [restful WARNING root] server not running: no certificate configured
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] PerfHandler: starting
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 2 04:57:15 localhost ceph-mgr[288059]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 2 04:57:15 localhost ceph-mgr[288059]: mgr load Constructed class from module: volumes
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.092+0000 7f780f0b6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.092+0000 7f780f0b6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.092+0000 7f780f0b6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.092+0000 7f780f0b6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.092+0000 7f780f0b6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] TaskHandler: starting
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 2 04:57:15 localhost ceph-mgr[288059]: [rbd_support INFO root] setup complete
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.116+0000 7f780b8af640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-mgr[288059]: client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.116+0000 7f780b8af640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.116+0000 7f780b8af640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.116+0000 7f780b8af640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:15.116+0000 7f780b8af640 -1 client.0 error registering admin socket command: (17) File exists
Dec 2 04:57:15 localhost sshd[299538]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 04:57:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 04:57:15 localhost systemd-logind[757]: New session 69 of user ceph-admin.
Dec 2 04:57:15 localhost systemd[1]: Created slice User Slice of UID 1002.
Dec 2 04:57:15 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 2 04:57:15 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 2 04:57:15 localhost podman[299540]: 2025-12-02 09:57:15.360846849 +0000 UTC m=+0.087297377 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd)
Dec 2 04:57:15 localhost systemd[1]: Starting User Manager for UID 1002...
Dec 2 04:57:15 localhost podman[299540]: 2025-12-02 09:57:15.393811021 +0000 UTC m=+0.120261489 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 2 04:57:15 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 04:57:15 localhost systemd[299560]: Queued start job for default target Main User Target.
Dec 2 04:57:15 localhost systemd[299560]: Created slice User Application Slice.
Dec 2 04:57:15 localhost systemd[299560]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 2 04:57:15 localhost systemd[299560]: Started Daily Cleanup of User's Temporary Directories.
Dec 2 04:57:15 localhost systemd[299560]: Reached target Paths.
Dec 2 04:57:15 localhost systemd[299560]: Reached target Timers.
Dec 2 04:57:15 localhost systemd[299560]: Starting D-Bus User Message Bus Socket...
Dec 2 04:57:15 localhost systemd[299560]: Starting Create User's Volatile Files and Directories...
Dec 2 04:57:15 localhost systemd[299560]: Finished Create User's Volatile Files and Directories.
Dec 2 04:57:15 localhost systemd[299560]: Listening on D-Bus User Message Bus Socket.
Dec 2 04:57:15 localhost systemd[299560]: Reached target Sockets.
Dec 2 04:57:15 localhost systemd[299560]: Reached target Basic System.
Dec 2 04:57:15 localhost systemd[299560]: Reached target Main User Target.
Dec 2 04:57:15 localhost systemd[299560]: Startup finished in 140ms.
Dec 2 04:57:15 localhost systemd[1]: Started User Manager for UID 1002.
Dec 2 04:57:15 localhost systemd[1]: Started Session 69 of User ceph-admin.
Dec 2 04:57:15 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.34369 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 2 04:57:15 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:57:16 localhost ceph-mgr[288059]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:57:16] ENGINE Bus STARTING
Dec 2 04:57:16 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:57:16] ENGINE Bus STARTING
Dec 2 04:57:16 localhost systemd[1]: Stopping User Manager for UID 1003...
Dec 2 04:57:16 localhost systemd[296422]: Activating special unit Exit the Session...
Dec 2 04:57:16 localhost systemd[296422]: Stopped target Main User Target.
Dec 2 04:57:16 localhost systemd[296422]: Stopped target Basic System.
Dec 2 04:57:16 localhost systemd[296422]: Stopped target Paths.
Dec 2 04:57:16 localhost systemd[296422]: Stopped target Sockets.
Dec 2 04:57:16 localhost systemd[296422]: Stopped target Timers.
Dec 2 04:57:16 localhost systemd[296422]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 2 04:57:16 localhost systemd[296422]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 2 04:57:16 localhost systemd[296422]: Closed D-Bus User Message Bus Socket.
Dec 2 04:57:16 localhost systemd[296422]: Stopped Create User's Volatile Files and Directories.
Dec 2 04:57:16 localhost systemd[296422]: Removed slice User Application Slice.
Dec 2 04:57:16 localhost systemd[296422]: Reached target Shutdown.
Dec 2 04:57:16 localhost systemd[296422]: Finished Exit the Session.
Dec 2 04:57:16 localhost systemd[296422]: Reached target Exit the Session.
Dec 2 04:57:16 localhost systemd[1]: user@1003.service: Deactivated successfully.
Dec 2 04:57:16 localhost systemd[1]: Stopped User Manager for UID 1003.
Dec 2 04:57:16 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 2 04:57:16 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 2 04:57:16 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 2 04:57:16 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 2 04:57:16 localhost systemd[1]: Removed slice User Slice of UID 1003.
Dec 2 04:57:16 localhost systemd[1]: user-1003.slice: Consumed 2.225s CPU time.
Dec 2 04:57:16 localhost ceph-mgr[288059]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:57:16] ENGINE Serving on http://172.18.0.107:8765
Dec 2 04:57:16 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:57:16] ENGINE Serving on http://172.18.0.107:8765
Dec 2 04:57:16 localhost ceph-mgr[288059]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:57:16] ENGINE Serving on https://172.18.0.107:7150
Dec 2 04:57:16 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:57:16] ENGINE Serving on https://172.18.0.107:7150
Dec 2 04:57:16 localhost ceph-mgr[288059]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:57:16] ENGINE Bus STARTED
Dec 2 04:57:16 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:57:16] ENGINE Bus STARTED
Dec 2 04:57:16 localhost ceph-mgr[288059]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:57:16] ENGINE Client ('172.18.0.107', 38106) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 2 04:57:16 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:57:16] ENGINE Client ('172.18.0.107', 38106) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 2 04:57:16 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:57:16 localhost podman[299708]: 2025-12-02 09:57:16.600705736 +0000 UTC m=+0.093947525 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, ceph=True, GIT_CLEAN=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 04:57:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 04:57:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:57:16 localhost systemd[1]: tmp-crun.83BfAD.mount: Deactivated successfully.
Dec 2 04:57:16 localhost podman[299727]: 2025-12-02 09:57:16.700944499 +0000 UTC m=+0.080445994 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 04:57:16 localhost podman[299727]: 2025-12-02 09:57:16.833531356 +0000 UTC m=+0.213033041 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 04:57:16 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 04:57:16 localhost podman[299728]: 2025-12-02 09:57:16.856527932 +0000 UTC m=+0.236024947 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 04:57:16 localhost podman[299708]: 2025-12-02 09:57:16.886544605 +0000 UTC m=+0.379786364 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64)
Dec 2 04:57:16 localhost podman[299728]: 2025-12-02 09:57:16.917108253 +0000 UTC m=+0.296605268 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes':
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 04:57:16 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:57:17 localhost nova_compute[281854]: 2025-12-02 09:57:17.528 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:17 localhost ceph-mgr[288059]: [devicehealth INFO root] Check health Dec 2 04:57:18 localhost ceph-mon[298296]: mon.np0005541913@-1(probing) e11 handle_auth_request failed to assign global_id Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.34408 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm INFO root] Saving service mon spec with placement label:mon Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Dec 2 04:57:18 localhost ceph-mon[298296]: mon.np0005541913@-1(synchronizing).osd e89 e89: 6 total, 6 up, 6 in Dec 2 04:57:18 localhost ceph-mon[298296]: Activating manager daemon np0005541913.mfesdm Dec 2 
04:57:18 localhost ceph-mon[298296]: Manager daemon np0005541910.kzipdo is unresponsive, replacing it with standby daemon np0005541913.mfesdm Dec 2 04:57:18 localhost ceph-mon[298296]: Manager daemon np0005541913.mfesdm is now available Dec 2 04:57:18 localhost ceph-mon[298296]: removing stray HostCache host record np0005541910.localdomain.devices.0 Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"} : dispatch Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"}]': finished Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"} : dispatch Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"}]': finished Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541913.mfesdm/mirror_snapshot_schedule"} : dispatch Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541913.mfesdm/trash_purge_schedule"} : dispatch Dec 2 04:57:18 localhost ceph-mon[298296]: [02/Dec/2025:09:57:16] ENGINE Bus STARTING Dec 2 04:57:18 localhost ceph-mon[298296]: [02/Dec/2025:09:57:16] ENGINE Serving on http://172.18.0.107:8765 Dec 2 04:57:18 localhost ceph-mon[298296]: [02/Dec/2025:09:57:16] ENGINE Serving on 
https://172.18.0.107:7150 Dec 2 04:57:18 localhost ceph-mon[298296]: [02/Dec/2025:09:57:16] ENGINE Bus STARTED Dec 2 04:57:18 localhost ceph-mon[298296]: [02/Dec/2025:09:57:16] ENGINE Client ('172.18.0.107', 38106) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 04:57:18 localhost ceph-mon[298296]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Dec 2 04:57:18 localhost ceph-mon[298296]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Dec 2 04:57:18 localhost ceph-mon[298296]: Cluster is now healthy Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:18 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:18 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:57:18 localhost nova_compute[281854]: 
2025-12-02 09:57:18.790 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm INFO root] Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm INFO root] Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm INFO root] Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error 
parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:18 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating 
np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:19 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:19 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.34414 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541913", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Dec 2 04:57:20 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: [cephadm INFO 
cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:20 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:20 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:57:20 localhost ceph-mon[298296]: Saving service mon spec with placement label:mon Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' 
entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 04:57:20 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 04:57:20 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 04:57:20 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 
04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 04:57:20 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 04:57:20 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 04:57:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 
localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:21 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:21 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:21 localhost ceph-mon[298296]: mon.np0005541913@-1(probing) e11 handle_auth_request failed to assign global_id Dec 2 04:57:21 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:21 localhost ceph-mgr[288059]: mgr finish mon failed to 
return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:57:21 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev 8f464c39-1004-431c-b9c0-0812e6f27fcc (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:57:21 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev 8f464c39-1004-431c-b9c0-0812e6f27fcc (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:57:21 localhost ceph-mgr[288059]: [progress INFO root] Completed event 8f464c39-1004-431c-b9c0-0812e6f27fcc (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Dec 2 04:57:22 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:57:22 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541911 (monmap changed)... Dec 2 04:57:22 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:57:22 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:57:22 localhost nova_compute[281854]: 2025-12-02 09:57:22.531 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:22 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Dec 2 04:57:22 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:22 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:57:22 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:22 localhost 
ceph-mon[298296]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:22 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:22 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:22 localhost ceph-mon[298296]: Reconfiguring mon.np0005541911 (monmap changed)... 
Dec 2 04:57:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:57:22 localhost ceph-mon[298296]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 2 04:57:23 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:57:23 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:57:23 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:57:23 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:57:23 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:57:23 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:57:23 localhost nova_compute[281854]: 2025-12-02 09:57:23.792 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:23 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:57:23 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:57:24 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 13 op/s
Dec 2 04:57:24 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:57:24 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:57:25 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:57:25 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:57:25 localhost ceph-mgr[288059]: [progress INFO root] Writing back 50 completed events
Dec 2 04:57:25 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:57:25 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:57:26 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541914 (monmap changed)...
Dec 2 04:57:26 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541914 (monmap changed)...
Dec 2 04:57:26 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 2 04:57:26 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 2 04:57:26 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 2 04:57:26 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:57:26 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost ceph-mon[298296]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:57:26 localhost ceph-mon[298296]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 2 04:57:26 localhost ceph-mon[298296]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 2 04:57:26 localhost ceph-mon[298296]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 2 04:57:26 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm'
Dec 2 04:57:27 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev 4a745475-b94b-4d1e-a5ff-bbe9d0eeddf1 (Updating node-proxy deployment (+4 -> 4))
Dec 2 04:57:27 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev 4a745475-b94b-4d1e-a5ff-bbe9d0eeddf1 (Updating node-proxy deployment (+4 -> 4))
Dec 2 04:57:27 localhost ceph-mgr[288059]: [progress INFO root] Completed event 4a745475-b94b-4d1e-a5ff-bbe9d0eeddf1 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 2 04:57:27 localhost nova_compute[281854]: 2025-12-02 09:57:27.533 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:27 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:57:27 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:57:28 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.34423 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541911", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 2 04:57:28 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 2 04:57:28 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 2 04:57:28 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 2 04:57:28 localhost nova_compute[281854]: 2025-12-02 09:57:28.795 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:57:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:57:29 localhost systemd[1]: tmp-crun.DVDNEs.mount: Deactivated successfully. Dec 2 04:57:29 localhost podman[300682]: 2025-12-02 09:57:29.481346046 +0000 UTC m=+0.109899862 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3) Dec 2 04:57:29 localhost podman[300682]: 2025-12-02 
09:57:29.496135931 +0000 UTC m=+0.124689767 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:57:29 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.26982 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541911"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:57:29 localhost ceph-mgr[288059]: [cephadm INFO root] Remove daemons mon.np0005541911 Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005541911 Dec 2 04:57:29 localhost ceph-mgr[288059]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912'] (from ['np0005541914', 'np0005541912']) Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912'] (from ['np0005541914', 'np0005541912']) Dec 2 04:57:29 localhost ceph-mgr[288059]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005541911 from monmap... Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing monitor np0005541911 from monmap... 
Dec 2 04:57:29 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports [] Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports [] Dec 2 04:57:29 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:29 localhost ceph-mgr[288059]: client.44339 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 2 04:57:29 localhost ceph-mgr[288059]: client.44339 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:57:29 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:57:29 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:57:29 localhost ceph-mgr[288059]: client.26949 ms_handle_reset on v2:172.18.0.108:3300/0 Dec 2 04:57:29 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:57:29 localhost sshd[300700]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:57:29 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:29 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:29 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:29 localhost ceph-mgr[288059]: [cephadm INFO 
cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:29 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: [progress INFO root] Writing back 50 completed events Dec 2 04:57:30 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:30 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Dec 2 04:57:30 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from 
mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:30 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:30 localhost ceph-mon[298296]: Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:57:30 localhost ceph-mon[298296]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:30 localhost ceph-mon[298296]: Remove daemons mon.np0005541911 Dec 2 04:57:30 localhost ceph-mon[298296]: Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912'] (from ['np0005541914', 'np0005541912']) Dec 2 04:57:30 localhost ceph-mon[298296]: Removing monitor np0005541911 from monmap... 
Dec 2 04:57:30 localhost ceph-mon[298296]: Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports [] Dec 2 04:57:30 localhost ceph-mon[298296]: mon.np0005541912 calling monitor election Dec 2 04:57:30 localhost ceph-mon[298296]: mon.np0005541914 calling monitor election Dec 2 04:57:30 localhost ceph-mon[298296]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1) Dec 2 04:57:30 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:57:30 localhost ceph-mon[298296]: overall HEALTH_OK Dec 2 04:57:30 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (22) Invalid argument Dec 2 04:57:31 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:31 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (22) Invalid argument Dec 2 04:57:32 localhost nova_compute[281854]: 2025-12-02 09:57:32.540 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:32 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Dec 2 04:57:32 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:32 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (22) Invalid argument Dec 2 04:57:32 localhost ceph-mon[298296]: mon.np0005541913@-1(probing) e13 my rank is now 2 (was -1) Dec 2 04:57:32 localhost ceph-mon[298296]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor 
election Dec 2 04:57:32 localhost ceph-mon[298296]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 Dec 2 04:57:32 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:33 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:33 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (22) Invalid argument Dec 2 04:57:33 localhost nova_compute[281854]: 2025-12-02 09:57:33.797 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:34 localhost openstack_network_exporter[242845]: ERROR 09:57:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:57:34 localhost openstack_network_exporter[242845]: ERROR 09:57:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:57:34 localhost openstack_network_exporter[242845]: ERROR 09:57:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:57:34 localhost openstack_network_exporter[242845]: ERROR 09:57:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:57:34 localhost openstack_network_exporter[242845]: Dec 2 04:57:34 localhost openstack_network_exporter[242845]: ERROR 09:57:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:57:34 localhost openstack_network_exporter[242845]: Dec 2 04:57:34 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:34 localhost ceph-mgr[288059]: mgr.server handle_open ignoring 
open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:34 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (22) Invalid argument Dec 2 04:57:35 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:35 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (22) Invalid argument Dec 2 04:57:35 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e13 handle_auth_request failed to assign global_id Dec 2 04:57:36 localhost podman[240799]: time="2025-12-02T09:57:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:57:36 localhost podman[240799]: @ - - [02/Dec/2025:09:57:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 04:57:36 localhost podman[240799]: @ - - [02/Dec/2025:09:57:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18714 "" "Go-http-client/1.1" Dec 2 04:57:36 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev d755f4ef-61cb-47a4-8905-95367e9e015f (Updating mon deployment (+1 -> 4)) Dec 2 04:57:36 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:57:36 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:57:36 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.34428 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541911.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:57:36 localhost ceph-mgr[288059]: [cephadm INFO root] Removed label mon from host 
np0005541911.localdomain Dec 2 04:57:36 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removed label mon from host np0005541911.localdomain Dec 2 04:57:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:57:36 localhost podman[301022]: 2025-12-02 09:57:36.448777346 +0000 UTC m=+0.087278286 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:57:36 localhost podman[301022]: 2025-12-02 09:57:36.458024563 +0000 UTC m=+0.096525503 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 04:57:36 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:57:36 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:36 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:36 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (22) Invalid argument Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e13 handle_auth_request failed to assign global_id Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e13 handle_auth_request failed to assign global_id Dec 2 04:57:37 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.34431 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541911.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:57:37 localhost ceph-mgr[288059]: [cephadm INFO root] Removed label mgr from host np0005541911.localdomain Dec 2 04:57:37 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005541911.localdomain Dec 2 04:57:37 localhost nova_compute[281854]: 2025-12-02 09:57:37.580 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:37 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:37 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541913: (22) 
Invalid argument Dec 2 04:57:37 localhost ceph-mon[298296]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:37 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:37 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:37 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541912 calling monitor election Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541914 calling monitor election Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1) Dec 2 04:57:37 localhost ceph-mon[298296]: Health check failed: 1/3 mons down, quorum np0005541914,np0005541912 (MON_DOWN) Dec 2 04:57:37 localhost ceph-mon[298296]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005541914,np0005541912 Dec 2 04:57:37 localhost ceph-mon[298296]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005541914,np0005541912 Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541913 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum) Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: 
from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:57:37 localhost ceph-mon[298296]: Deploying daemon mon.np0005541911 on np0005541911.localdomain Dec 2 04:57:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:37 localhost ceph-mon[298296]: Removed label mon from host np0005541911.localdomain Dec 2 04:57:37 localhost ceph-mon[298296]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election Dec 2 04:57:37 localhost ceph-mon[298296]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e13 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e13 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Dec 2 04:57:37 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e13 collect_metadata vda: 
no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:37 localhost ceph-mon[298296]: mgrc update_daemon_metadata mon.np0005541913 metadata {addrs=[v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005541913.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005541913.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Dec 2 04:57:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:57:38 localhost podman[301040]: 2025-12-02 09:57:38.451422424 +0000 UTC m=+0.090621785 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9) Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e13 handle_auth_request failed to assign global_id Dec 2 04:57:38 localhost podman[301040]: 2025-12-02 09:57:38.468051169 +0000 UTC m=+0.107250540 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, version=9.6, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9) Dec 2 04:57:38 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 04:57:38 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev d755f4ef-61cb-47a4-8905-95367e9e015f (Updating mon deployment (+1 -> 4)) Dec 2 04:57:38 localhost ceph-mgr[288059]: [progress INFO root] Completed event d755f4ef-61cb-47a4-8905-95367e9e015f (Updating mon deployment (+1 -> 4)) in 2 seconds Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541913 calling monitor election Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541912 calling monitor election Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541913 calling monitor election Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541914 calling monitor election Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2) Dec 2 04:57:38 localhost ceph-mon[298296]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005541914,np0005541912) Dec 2 04:57:38 localhost ceph-mon[298296]: Cluster is now healthy Dec 2 04:57:38 localhost ceph-mon[298296]: overall HEALTH_OK Dec 2 04:57:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:38 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev 06c59072-bd04-4a28-9ce6-b5cc50ffbfcc (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:57:38 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev 06c59072-bd04-4a28-9ce6-b5cc50ffbfcc (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:57:38 localhost ceph-mgr[288059]: [progress INFO root] Completed event 06c59072-bd04-4a28-9ce6-b5cc50ffbfcc (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Dec 2 04:57:38 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 
42 GiB avail Dec 2 04:57:38 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.44380 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541911.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:57:38 localhost ceph-mgr[288059]: [cephadm INFO root] Removed label _admin from host np0005541911.localdomain Dec 2 04:57:38 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005541911.localdomain Dec 2 04:57:38 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect) Dec 2 04:57:38 localhost nova_compute[281854]: 2025-12-02 09:57:38.800 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:38 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e13 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 2 04:57:38 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541911 172.18.0.105:0/480338482; not ready for session (expect reconnect) Dec 2 04:57:38 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541911: (2) No such file or directory Dec 2 04:57:39 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541911: (22) Invalid argument Dec 2 04:57:39 localhost ceph-mon[298296]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election Dec 2 04:57:39 localhost ceph-mon[298296]: paxos.2).electionLogic(52) init, last seen epoch 52 Dec 2 04:57:39 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:57:39 localhost podman[301078]: 2025-12-02 09:57:39.449227824 +0000 UTC m=+0.088895940 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:57:39 localhost podman[301078]: 2025-12-02 09:57:39.458856113 +0000 UTC m=+0.098524159 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:57:39 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:57:39 localhost ceph-mgr[288059]: mgr.server handle_report got status from non-daemon mon.np0005541913 Dec 2 04:57:39 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:39.756+0000 7f783c290640 -1 mgr.server handle_report got status from non-daemon mon.np0005541913 Dec 2 04:57:39 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541911 172.18.0.105:0/480338482; not ready for session (expect reconnect) Dec 2 04:57:39 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541911: (22) Invalid argument Dec 2 04:57:40 localhost ceph-mgr[288059]: [progress INFO root] Writing back 50 completed events Dec 2 04:57:40 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:40 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541911 172.18.0.105:0/480338482; not ready for session (expect reconnect) Dec 2 04:57:40 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541911: (22) Invalid argument Dec 2 04:57:41 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541911 172.18.0.105:0/480338482; not ready for session (expect reconnect) Dec 2 04:57:41 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541911: (22) Invalid argument Dec 2 04:57:42 localhost nova_compute[281854]: 2025-12-02 09:57:42.614 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:42 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:42 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541911 172.18.0.105:0/480338482; not ready for 
session (expect reconnect) Dec 2 04:57:42 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541911: (22) Invalid argument Dec 2 04:57:43 localhost nova_compute[281854]: 2025-12-02 09:57:43.804 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:43 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541911 172.18.0.105:0/480338482; not ready for session (expect reconnect) Dec 2 04:57:43 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541911: (22) Invalid argument Dec 2 04:57:44 localhost ceph-mon[298296]: paxos.2).electionLogic(53) init, last seen epoch 53, mid-election, bumping Dec 2 04:57:44 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:44 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:44 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:44 localhost ceph-mon[298296]: Removed label _admin from host np0005541911.localdomain Dec 2 04:57:44 localhost ceph-mon[298296]: mon.np0005541912 calling monitor election Dec 2 04:57:44 localhost ceph-mon[298296]: mon.np0005541914 calling monitor election Dec 2 04:57:44 localhost ceph-mon[298296]: mon.np0005541913 calling monitor election Dec 2 04:57:44 localhost ceph-mon[298296]: mon.np0005541911 calling monitor election Dec 2 04:57:44 localhost ceph-mon[298296]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913,np0005541911 in quorum (ranks 0,1,2,3) Dec 2 04:57:44 localhost ceph-mon[298296]: overall HEALTH_OK Dec 2 04:57:44 localhost ceph-mon[298296]: from='mgr.26470 ' 
entity='mgr.np0005541913.mfesdm' Dec 2 04:57:44 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Removing np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : 
pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:44 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541911 172.18.0.105:0/480338482; not ready for session (expect reconnect) Dec 2 04:57:45 localhost ceph-mgr[288059]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 04:57:45 localhost ceph-mgr[288059]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:57:45 localhost ceph-mgr[288059]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 04:57:45 localhost ceph-mgr[288059]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:57:45 localhost ceph-mgr[288059]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 04:57:45 localhost ceph-mgr[288059]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:57:45 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:45 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:57:45 localhost ceph-mon[298296]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:45 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:45 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:45 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:45 localhost ceph-mon[298296]: Removing np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 04:57:45 localhost ceph-mon[298296]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 04:57:45 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:45 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:57:45 localhost podman[301404]: 2025-12-02 09:57:45.566820463 +0000 UTC m=+0.064228029 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 04:57:45 localhost podman[301404]: 2025-12-02 09:57:45.580180851 +0000 UTC m=+0.077588467 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 04:57:45 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:57:45 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev 6600d02e-864d-4dee-93ce-68068586e2f3 (Updating mgr deployment (-1 -> 3)) Dec 2 04:57:45 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Removing daemon mgr.np0005541911.adcgiw from np0005541911.localdomain -- ports [8765] Dec 2 04:57:45 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing daemon mgr.np0005541911.adcgiw from np0005541911.localdomain -- ports [8765] Dec 2 04:57:45 localhost ceph-mon[298296]: mon.np0005541913@2(peon).osd e89 _set_new_cache_sizes cache_size:1019640581 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:57:45 localhost ceph-mgr[288059]: mgr.server handle_report got status from non-daemon mon.np0005541911 Dec 2 04:57:45 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:57:45.997+0000 7f783c290640 -1 mgr.server handle_report got status from non-daemon mon.np0005541911 Dec 2 04:57:46 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:46 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:46 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:46 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:46 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:46 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:46 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:46 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:46 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:46 
localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:46 localhost nova_compute[281854]: 2025-12-02 09:57:46.628 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:46 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:46 localhost nova_compute[281854]: 2025-12-02 09:57:46.890 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:46 localhost nova_compute[281854]: 2025-12-02 09:57:46.891 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:57:46 localhost nova_compute[281854]: 2025-12-02 09:57:46.891 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:57:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:57:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:57:47 localhost podman[301440]: 2025-12-02 09:57:47.44832009 +0000 UTC m=+0.084539594 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 04:57:47 localhost podman[301440]: 2025-12-02 09:57:47.457913786 +0000 UTC m=+0.094133310 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:57:47 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:57:47 localhost ceph-mon[298296]: Removing daemon mgr.np0005541911.adcgiw from np0005541911.localdomain -- ports [8765] Dec 2 04:57:47 localhost podman[301441]: 2025-12-02 09:57:47.553526635 +0000 UTC m=+0.186653326 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible) Dec 2 04:57:47 localhost nova_compute[281854]: 2025-12-02 09:57:47.616 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:47 localhost podman[301441]: 2025-12-02 09:57:47.617055245 +0000 UTC m=+0.250181996 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller) Dec 2 04:57:47 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:57:47 localhost nova_compute[281854]: 2025-12-02 09:57:47.677 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:57:47 localhost nova_compute[281854]: 2025-12-02 09:57:47.677 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:57:47 localhost nova_compute[281854]: 2025-12-02 09:57:47.678 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:57:47 localhost nova_compute[281854]: 2025-12-02 09:57:47.678 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:57:47 localhost ceph-mgr[288059]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.np0005541911.adcgiw Dec 2 04:57:47 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing key for mgr.np0005541911.adcgiw Dec 2 04:57:47 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev 6600d02e-864d-4dee-93ce-68068586e2f3 (Updating mgr deployment (-1 -> 3)) Dec 2 04:57:47 localhost ceph-mgr[288059]: [progress INFO root] Completed event 6600d02e-864d-4dee-93ce-68068586e2f3 (Updating mgr deployment (-1 -> 3)) in 2 seconds Dec 2 04:57:47 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev 
cf2d4045-cc42-456b-be11-4a88f1bde21b (Updating mon deployment (-1 -> 3)) Dec 2 04:57:47 localhost ceph-mgr[288059]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912', 'np0005541913'] (from ['np0005541914', 'np0005541912', 'np0005541913']) Dec 2 04:57:47 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912', 'np0005541913'] (from ['np0005541914', 'np0005541912', 'np0005541913']) Dec 2 04:57:47 localhost ceph-mgr[288059]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005541911 from monmap... Dec 2 04:57:47 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing monitor np0005541911 from monmap... Dec 2 04:57:47 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports [] Dec 2 04:57:47 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports [] Dec 2 04:57:47 localhost ceph-mon[298296]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election Dec 2 04:57:47 localhost ceph-mon[298296]: paxos.2).electionLogic(56) init, last seen epoch 56 Dec 2 04:57:47 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:48 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:48 localhost nova_compute[281854]: 2025-12-02 09:57:48.806 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:49 localhost ceph-mgr[288059]: [progress INFO root] Writing back 50 completed events Dec 2 04:57:49 localhost 
nova_compute[281854]: 2025-12-02 09:57:49.131 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:57:49 localhost nova_compute[281854]: 2025-12-02 09:57:49.213 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:57:49 localhost nova_compute[281854]: 2025-12-02 09:57:49.213 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache 
for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:57:49 localhost nova_compute[281854]: 2025-12-02 09:57:49.214 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:49 localhost nova_compute[281854]: 2025-12-02 09:57:49.215 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:49 localhost nova_compute[281854]: 2025-12-02 09:57:49.215 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:49 localhost nova_compute[281854]: 2025-12-02 09:57:49.216 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:57:49 localhost nova_compute[281854]: 2025-12-02 09:57:49.217 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:49 localhost nova_compute[281854]: 2025-12-02 09:57:49.839 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:50 localhost nova_compute[281854]: 2025-12-02 09:57:50.595 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:57:50 localhost nova_compute[281854]: 2025-12-02 09:57:50.596 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:57:50 localhost nova_compute[281854]: 2025-12-02 09:57:50.596 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:57:50 localhost nova_compute[281854]: 2025-12-02 09:57:50.596 281858 DEBUG nova.compute.resource_tracker 
[None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:57:50 localhost nova_compute[281854]: 2025-12-02 09:57:50.597 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:57:50 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:50 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e15 handle_auth_request failed to assign global_id Dec 2 04:57:50 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e15 handle_auth_request failed to assign global_id Dec 2 04:57:51 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e15 handle_auth_request failed to assign global_id Dec 2 04:57:52 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e15 handle_auth_request failed to assign global_id Dec 2 04:57:52 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:52 localhost nova_compute[281854]: 2025-12-02 09:57:52.668 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:52 localhost ceph-mon[298296]: paxos.2).electionLogic(57) init, last seen epoch 57, mid-election, bumping Dec 2 04:57:52 localhost ceph-mon[298296]: mon.np0005541913@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:52 localhost ceph-mon[298296]: 
mon.np0005541913@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:52 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:57:52 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev cf2d4045-cc42-456b-be11-4a88f1bde21b (Updating mon deployment (-1 -> 3)) Dec 2 04:57:52 localhost ceph-mgr[288059]: [progress INFO root] Completed event cf2d4045-cc42-456b-be11-4a88f1bde21b (Updating mon deployment (-1 -> 3)) in 5 seconds Dec 2 04:57:53 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev 151b76ef-7ac5-4a24-9563-da569f49d4b6 (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:57:53 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev 151b76ef-7ac5-4a24-9563-da569f49d4b6 (Updating node-proxy deployment (+4 -> 4)) Dec 2 04:57:53 localhost ceph-mgr[288059]: [progress INFO root] Completed event 151b76ef-7ac5-4a24-9563-da569f49d4b6 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Dec 2 04:57:53 localhost ceph-mon[298296]: Removing key for mgr.np0005541911.adcgiw Dec 2 04:57:53 localhost ceph-mon[298296]: Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912', 'np0005541913'] (from ['np0005541914', 'np0005541912', 'np0005541913']) Dec 2 04:57:53 localhost ceph-mon[298296]: Removing monitor np0005541911 from monmap... 
Dec 2 04:57:53 localhost ceph-mon[298296]: Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports [] Dec 2 04:57:53 localhost ceph-mon[298296]: mon.np0005541912 calling monitor election Dec 2 04:57:53 localhost ceph-mon[298296]: mon.np0005541914 calling monitor election Dec 2 04:57:53 localhost ceph-mon[298296]: mon.np0005541913 calling monitor election Dec 2 04:57:53 localhost ceph-mon[298296]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1) Dec 2 04:57:53 localhost ceph-mon[298296]: mon.np0005541912 calling monitor election Dec 2 04:57:53 localhost ceph-mon[298296]: overall HEALTH_OK Dec 2 04:57:53 localhost ceph-mon[298296]: mon.np0005541914 calling monitor election Dec 2 04:57:53 localhost ceph-mon[298296]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2) Dec 2 04:57:53 localhost ceph-mon[298296]: overall HEALTH_OK Dec 2 04:57:53 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:53 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:53 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.54101 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005541911.localdomain", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:57:53 localhost ceph-mgr[288059]: [cephadm INFO root] Added label _no_schedule to host np0005541911.localdomain Dec 2 04:57:53 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005541911.localdomain Dec 2 04:57:53 localhost ceph-mgr[288059]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541911.localdomain Dec 2 04:57:53 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541911.localdomain Dec 2 04:57:53 localhost nova_compute[281854]: 2025-12-02 09:57:53.832 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:54 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:54 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:54 localhost ceph-mon[298296]: Added label _no_schedule to host np0005541911.localdomain Dec 2 04:57:54 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:54 localhost ceph-mon[298296]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541911.localdomain Dec 2 04:57:54 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:57:54 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2425549797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.065 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.128 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.129 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.350 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.352 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11678MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.353 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.354 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:57:54 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.54116 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005541911.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.627 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': 
{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.628 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.628 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:57:54 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.690 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 04:57:54 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:54 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:54 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:54 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:54 localhost ceph-mgr[288059]: 
log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:54 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.743 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.744 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.756 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.774 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 04:57:54 localhost nova_compute[281854]: 2025-12-02 09:57:54.810 281858 DEBUG oslo_concurrency.processutils [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:57:55 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:55 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:55 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:57:55 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:57:55 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1003759709' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.268 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.274 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.292 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 
'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.295 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.296 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.296 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.296 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.313 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] There are 0 instances to clean _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.313 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:55 localhost nova_compute[281854]: 2025-12-02 09:57:55.313 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 04:57:55 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:55 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:55 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:55 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:55 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:55 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:55 localhost ceph-mon[298296]: mon.np0005541913@2(peon).osd e89 _set_new_cache_sizes cache_size:1020047553 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:57:55 localhost 
ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.34455 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005541911.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:57:55 localhost ceph-mgr[288059]: [cephadm INFO root] Removed host np0005541911.localdomain Dec 2 04:57:55 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removed host np0005541911.localdomain Dec 2 04:57:56 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:56 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:56 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:57:56 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:56 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:56 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:57:56 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch Dec 2 04:57:56 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch Dec 2 04:57:56 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key 
del","key":"mgr/cephadm/host.np0005541911.localdomain"}]': finished Dec 2 04:57:56 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:56 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev d67aba63-6539-453f-9253-9dfdf27f5a23 (Updating node-proxy deployment (+3 -> 3)) Dec 2 04:57:56 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev d67aba63-6539-453f-9253-9dfdf27f5a23 (Updating node-proxy deployment (+3 -> 3)) Dec 2 04:57:56 localhost ceph-mgr[288059]: [progress INFO root] Completed event d67aba63-6539-453f-9253-9dfdf27f5a23 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 04:57:56 localhost nova_compute[281854]: 2025-12-02 09:57:56.313 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:56 localhost nova_compute[281854]: 2025-12-02 09:57:56.314 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:56 localhost nova_compute[281854]: 2025-12-02 09:57:56.314 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:56 localhost nova_compute[281854]: 2025-12-02 09:57:56.315 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:57:56 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541912 (monmap changed)... Dec 2 04:57:56 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541912 (monmap changed)... Dec 2 04:57:56 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:57:56 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:57:56 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:57 localhost ceph-mon[298296]: Removed host np0005541911.localdomain Dec 2 04:57:57 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:57 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:57 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:57 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:57:57 localhost ceph-mon[298296]: Reconfiguring crash.np0005541912 (monmap changed)... Dec 2 04:57:57 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:57:57 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:57:57 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... 
Dec 2 04:57:57 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... Dec 2 04:57:57 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:57:57 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:57:57 localhost nova_compute[281854]: 2025-12-02 09:57:57.713 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:57 localhost ceph-mgr[288059]: [progress INFO root] Writing back 50 completed events Dec 2 04:57:58 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Dec 2 04:57:58 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Dec 2 04:57:58 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:57:58 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:57:58 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:58 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:58 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:57:58 localhost ceph-mon[298296]: Reconfiguring osd.2 (monmap changed)... 
Dec 2 04:57:58 localhost ceph-mon[298296]: Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:57:58 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:58 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:58 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:57:58 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:57:58 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:57:58 localhost nova_compute[281854]: 2025-12-02 09:57:58.877 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:57:59 localhost ceph-mon[298296]: Reconfiguring osd.5 (monmap changed)... Dec 2 04:57:59 localhost ceph-mon[298296]: Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:57:59 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... Dec 2 04:57:59 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... Dec 2 04:57:59 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:57:59 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:58:00 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541912.qwddia (monmap changed)... Dec 2 04:58:00 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541912.qwddia (monmap changed)... 
Dec 2 04:58:00 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:58:00 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:58:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:58:00 localhost podman[301889]: 2025-12-02 09:58:00.438867668 +0000 UTC m=+0.075978274 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:58:00 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:00 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:00 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:00 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... Dec 2 04:58:00 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:00 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:58:00 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:00 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:00 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:00 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : 
dispatch Dec 2 04:58:00 localhost podman[301889]: 2025-12-02 09:58:00.455134374 +0000 UTC m=+0.092245030 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Dec 2 04:58:00 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:58:00 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:00 localhost ceph-mon[298296]: mon.np0005541913@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054604 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:01 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)... Dec 2 04:58:01 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:58:01 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:58:01 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:58:01 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:58:01 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:58:02 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541913 (monmap changed)... Dec 2 04:58:02 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541913 (monmap changed)... 
Dec 2 04:58:02 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:02 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:02 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:02 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:02 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:58:02 localhost ceph-mon[298296]: Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:58:02 localhost ceph-mon[298296]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:58:02 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:02 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:02 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:02 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:02 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:02 localhost nova_compute[281854]: 2025-12-02 09:58:02.767 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:02 localhost podman[301962]: Dec 2 04:58:03 
localhost podman[301962]: 2025-12-02 09:58:03.012631938 +0000 UTC m=+0.085710584 container create 177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_nightingale, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux , vcs-type=git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:58:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:58:03.042 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:58:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:58:03.043 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:58:03 
localhost ovn_metadata_agent[160216]: 2025-12-02 09:58:03.043 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:58:03 localhost systemd[1]: Started libpod-conmon-177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21.scope. Dec 2 04:58:03 localhost systemd[1]: Started libcrun container. Dec 2 04:58:03 localhost podman[301962]: 2025-12-02 09:58:03.071648388 +0000 UTC m=+0.144727004 container init 177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_nightingale, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, architecture=x86_64, release=1763362218, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, ceph=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., RELEASE=main) Dec 2 04:58:03 localhost podman[301962]: 2025-12-02 09:58:02.975034562 +0000 UTC m=+0.048113228 image pull 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:03 localhost systemd[1]: tmp-crun.vbUCFm.mount: Deactivated successfully. Dec 2 04:58:03 localhost podman[301962]: 2025-12-02 09:58:03.080571257 +0000 UTC m=+0.153649893 container start 177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_nightingale, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, ceph=True, io.buildah.version=1.41.4, name=rhceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:03 localhost podman[301962]: 2025-12-02 09:58:03.080815283 +0000 UTC m=+0.153893919 container attach 177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_nightingale, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, 
build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:58:03 localhost systemd[1]: libpod-177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21.scope: Deactivated successfully. Dec 2 04:58:03 localhost sad_nightingale[301977]: 167 167 Dec 2 04:58:03 localhost podman[301962]: 2025-12-02 09:58:03.083269939 +0000 UTC m=+0.156348555 container died 177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_nightingale, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git) Dec 2 04:58:03 localhost podman[301982]: 2025-12-02 09:58:03.165765316 +0000 UTC m=+0.076045986 container remove 177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_nightingale, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.41.4, name=rhceph, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True) Dec 2 04:58:03 localhost systemd[1]: libpod-conmon-177ccc4e336d401093607b1140c85bb334cbbad92533297013beb877093cbc21.scope: Deactivated successfully. 
Dec 2 04:58:03 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Dec 2 04:58:03 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Dec 2 04:58:03 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:58:03 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:58:03 localhost ceph-mon[298296]: Reconfiguring crash.np0005541913 (monmap changed)... Dec 2 04:58:03 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:03 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:03 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:03 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:58:03 localhost podman[302050]: Dec 2 04:58:03 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.34458 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:58:03 localhost ceph-mgr[288059]: [cephadm INFO root] Saving service mon spec with placement label:mon Dec 2 04:58:03 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Dec 2 04:58:03 localhost podman[302050]: 2025-12-02 09:58:03.860470636 +0000 UTC m=+0.061655741 container create dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_clarke, RELEASE=main, release=1763362218, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, 
com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph) Dec 2 04:58:03 localhost nova_compute[281854]: 2025-12-02 09:58:03.911 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:03 localhost systemd[1]: Started libpod-conmon-dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b.scope. Dec 2 04:58:03 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:03 localhost podman[302050]: 2025-12-02 09:58:03.842709171 +0000 UTC m=+0.043894286 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:03 localhost podman[302050]: 2025-12-02 09:58:03.955829057 +0000 UTC m=+0.157014152 container init dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_clarke, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, GIT_CLEAN=True) Dec 2 04:58:03 localhost podman[302050]: 2025-12-02 09:58:03.963602095 +0000 UTC m=+0.164787220 container start dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_clarke, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git) Dec 2 04:58:03 localhost compassionate_clarke[302065]: 167 167 Dec 2 04:58:03 localhost podman[302050]: 2025-12-02 09:58:03.965416074 +0000 UTC m=+0.166601229 container attach dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_clarke, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.41.4, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public) Dec 2 04:58:03 localhost systemd[1]: libpod-dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b.scope: Deactivated successfully. Dec 2 04:58:03 localhost podman[302050]: 2025-12-02 09:58:03.967312074 +0000 UTC m=+0.168497209 container died dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_clarke, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph) Dec 2 04:58:04 localhost systemd[1]: var-lib-containers-storage-overlay-bd9d6e487cb31c04755d1ff5516e54b619ec9e78adb0e55268308706e703e260-merged.mount: Deactivated successfully. 
Dec 2 04:58:04 localhost systemd[1]: var-lib-containers-storage-overlay-6a5b75bee3978808b04bcfe9431e5596a747ae3099fb3ef6c2978ab18db9e2ed-merged.mount: Deactivated successfully. Dec 2 04:58:04 localhost openstack_network_exporter[242845]: ERROR 09:58:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:58:04 localhost openstack_network_exporter[242845]: ERROR 09:58:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:58:04 localhost openstack_network_exporter[242845]: ERROR 09:58:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:58:04 localhost openstack_network_exporter[242845]: ERROR 09:58:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:58:04 localhost openstack_network_exporter[242845]: Dec 2 04:58:04 localhost openstack_network_exporter[242845]: ERROR 09:58:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:58:04 localhost openstack_network_exporter[242845]: Dec 2 04:58:04 localhost podman[302070]: 2025-12-02 09:58:04.070635639 +0000 UTC m=+0.094391457 container remove dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_clarke, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.expose-services=) Dec 2 04:58:04 localhost systemd[1]: libpod-conmon-dbbf4e9bcf1f79f2fe2aa09bf07261ca44d6be15391406d489322dfe9146805b.scope: Deactivated successfully. Dec 2 04:58:04 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Dec 2 04:58:04 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Dec 2 04:58:04 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:58:04 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:58:04 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 04:58:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/486568655' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 04:58:04 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 04:58:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/486568655' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 04:58:04 localhost ceph-mon[298296]: Reconfiguring osd.0 (monmap changed)... 
Dec 2 04:58:04 localhost ceph-mon[298296]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:58:04 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:04 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:04 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:04 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:58:04 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.641151) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484641203, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1882, "num_deletes": 283, "total_data_size": 4743595, "memory_usage": 4961392, "flush_reason": "Manual Compaction"} Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484668156, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3555029, "file_checksum": "", "file_checksum_func_name": 
"Unknown", "smallest_seqno": 11598, "largest_seqno": 13479, "table_properties": {"data_size": 3546648, "index_size": 4630, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23990, "raw_average_key_size": 22, "raw_value_size": 3527072, "raw_average_value_size": 3296, "num_data_blocks": 194, "num_entries": 1070, "num_filter_entries": 1070, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669438, "oldest_key_time": 1764669438, "file_creation_time": 1764669484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 27059 microseconds, and 9071 cpu microseconds. Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.668208) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3555029 bytes OK Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.668236) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.670200) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.670223) EVENT_LOG_v1 {"time_micros": 1764669484670216, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.670249) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4732778, prev total WAL file size 4732778, number of live WAL files 2. Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.671357) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323839' seq:72057594037927935, type:22 .. 
'6B760031353530' seq:0, type:0; will stop at (end) Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3471KB)], [15(15MB)] Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484671440, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 19655172, "oldest_snapshot_seqno": -1} Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 11403 keys, 18677977 bytes, temperature: kUnknown Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484793801, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 18677977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18609805, "index_size": 38567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28549, "raw_key_size": 305959, "raw_average_key_size": 26, "raw_value_size": 18411853, "raw_average_value_size": 1614, "num_data_blocks": 1472, "num_entries": 11403, "num_filter_entries": 11403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764669484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.794145) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 18677977 bytes Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.796108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.5 rd, 152.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 15.4 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(10.8) write-amplify(5.3) OK, records in: 11916, records dropped: 513 output_compression: NoCompression Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.796132) EVENT_LOG_v1 {"time_micros": 1764669484796120, "job": 6, "event": "compaction_finished", "compaction_time_micros": 122477, "compaction_time_cpu_micros": 49487, "output_level": 6, "num_output_files": 1, "total_output_size": 18677977, "num_input_records": 11916, "num_output_records": 11403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541913/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484796731, "job": 6, "event": "table_file_deletion", "file_number": 17} Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484798363, "job": 6, "event": "table_file_deletion", "file_number": 15} Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.671218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.798460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.798470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.798473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.798476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:04.798478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 04:58:04 localhost podman[302145]: Dec 2 04:58:04 localhost podman[302145]: 2025-12-02 09:58:04.928981798 +0000 UTC m=+0.080847625 container create 
934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_booth, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vcs-type=git, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main) Dec 2 04:58:04 localhost systemd[1]: Started libpod-conmon-934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311.scope. Dec 2 04:58:04 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:04 localhost podman[302145]: 2025-12-02 09:58:04.99332361 +0000 UTC m=+0.145189367 container init 934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_booth, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_CLEAN=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-11-26T19:44:28Z) Dec 2 04:58:04 localhost podman[302145]: 2025-12-02 09:58:04.896688653 +0000 UTC m=+0.048554470 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:05 localhost podman[302145]: 2025-12-02 09:58:05.002103755 +0000 UTC m=+0.153969512 container start 934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_booth, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main) Dec 2 04:58:05 localhost podman[302145]: 2025-12-02 09:58:05.002255559 +0000 UTC m=+0.154121316 container attach 934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_booth, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, 
GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, distribution-scope=public, GIT_CLEAN=True, vcs-type=git) Dec 2 04:58:05 localhost elated_booth[302159]: 167 167 Dec 2 04:58:05 localhost systemd[1]: libpod-934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311.scope: Deactivated successfully. Dec 2 04:58:05 localhost podman[302145]: 2025-12-02 09:58:05.006883692 +0000 UTC m=+0.158749549 container died 934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_booth, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, architecture=x86_64, ceph=True, release=1763362218, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, name=rhceph, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main) Dec 2 04:58:05 localhost systemd[1]: var-lib-containers-storage-overlay-c33e50fd7f93b8f94a634a4ca052d5b58d0a040d2bd768832d781517f963fa6a-merged.mount: Deactivated successfully. 
Dec 2 04:58:05 localhost podman[302164]: 2025-12-02 09:58:05.109457967 +0000 UTC m=+0.088416377 container remove 934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_booth, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git) Dec 2 04:58:05 localhost systemd[1]: libpod-conmon-934816128e30bbff845bac51c03aee07ad177a04c6c089934ffdc11b2214d311.scope: Deactivated successfully. Dec 2 04:58:05 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.54149 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541914", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Dec 2 04:58:05 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:58:05 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:58:05 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:05 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:05 localhost ceph-mon[298296]: Saving service mon spec with placement label:mon Dec 2 04:58:05 localhost ceph-mon[298296]: Reconfiguring osd.3 (monmap changed)... Dec 2 04:58:05 localhost ceph-mon[298296]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:58:05 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:05 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:05 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:05 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:05 localhost ceph-mon[298296]: mon.np0005541913@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054729 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:06 localhost podman[240799]: time="2025-12-02T09:58:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:58:06 localhost podman[240799]: @ - - [02/Dec/2025:09:58:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" 
"Go-http-client/1.1" Dec 2 04:58:06 localhost podman[240799]: @ - - [02/Dec/2025:09:58:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18717 "" "Go-http-client/1.1" Dec 2 04:58:06 localhost podman[302243]: Dec 2 04:58:06 localhost podman[302243]: 2025-12-02 09:58:06.147214315 +0000 UTC m=+0.143420458 container create e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_khayyam, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, RELEASE=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=) Dec 2 04:58:06 localhost systemd[1]: Started libpod-conmon-e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f.scope. Dec 2 04:58:06 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:06 localhost podman[302243]: 2025-12-02 09:58:06.115164488 +0000 UTC m=+0.111370671 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:06 localhost podman[302243]: 2025-12-02 09:58:06.221714829 +0000 UTC m=+0.217920992 container init e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_khayyam, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=) Dec 2 04:58:06 localhost podman[302243]: 2025-12-02 09:58:06.232540089 +0000 UTC m=+0.228746222 container start e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_khayyam, GIT_BRANCH=main, RELEASE=main, build-date=2025-11-26T19:44:28Z, ceph=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 2 04:58:06 localhost podman[302243]: 2025-12-02 09:58:06.232890678 +0000 UTC m=+0.229096811 container attach e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_khayyam, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., 
build-date=2025-11-26T19:44:28Z, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True) Dec 2 04:58:06 localhost recursing_khayyam[302258]: 167 167 Dec 2 04:58:06 localhost systemd[1]: libpod-e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f.scope: Deactivated successfully. Dec 2 04:58:06 localhost podman[302243]: 2025-12-02 09:58:06.237715518 +0000 UTC m=+0.233921651 container died e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_khayyam, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Dec 2 04:58:06 localhost podman[302263]: 2025-12-02 09:58:06.345582224 +0000 UTC m=+0.093408651 container remove e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_khayyam, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, build-date=2025-11-26T19:44:28Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:58:06 localhost systemd[1]: libpod-conmon-e5f9cd4f03f99397d6af1454563a2adea60726b577981d31ac2b2fce09a12e6f.scope: Deactivated successfully. Dec 2 04:58:06 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:58:06 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:58:06 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:58:06 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:58:06 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:58:06 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:06 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:06 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:06 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:06 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:58:06 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:06 localhost podman[302296]: 2025-12-02 09:58:06.675082161 +0000 UTC m=+0.100578712 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:58:06 localhost podman[302296]: 2025-12-02 09:58:06.712230085 +0000 UTC m=+0.137726596 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 04:58:06 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:58:06 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.44434 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541914"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:58:06 localhost ceph-mgr[288059]: [cephadm INFO root] Remove daemons mon.np0005541914 Dec 2 04:58:06 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005541914 Dec 2 04:58:06 localhost ceph-mgr[288059]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005541914: new quorum should be ['np0005541912', 'np0005541913'] (from ['np0005541912', 'np0005541913']) Dec 2 04:58:06 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005541914: new quorum should be ['np0005541912', 'np0005541913'] (from ['np0005541912', 'np0005541913']) Dec 2 04:58:06 localhost ceph-mgr[288059]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005541914 from monmap... Dec 2 04:58:06 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing monitor np0005541914 from monmap... 
Dec 2 04:58:06 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005541914 from np0005541914.localdomain -- ports [] Dec 2 04:58:06 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005541914 from np0005541914.localdomain -- ports [] Dec 2 04:58:06 localhost ceph-mon[298296]: mon.np0005541913@2(peon) e16 my rank is now 1 (was 2) Dec 2 04:58:06 localhost ceph-mgr[288059]: client.44339 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 2 04:58:06 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 2 04:58:06 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 2 04:58:06 localhost ceph-mgr[288059]: client.26949 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 2 04:58:06 localhost ceph-mon[298296]: mon.np0005541913@1(probing) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0) Dec 2 04:58:06 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch Dec 2 04:58:06 localhost ceph-mon[298296]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election Dec 2 04:58:06 localhost ceph-mon[298296]: paxos.1).electionLogic(62) init, last seen epoch 62 Dec 2 04:58:06 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:58:06 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:58:06 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0) Dec 2 04:58:06 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' 
entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch Dec 2 04:58:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 2 04:58:07 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:58:07 localhost ceph-mon[298296]: Remove daemons mon.np0005541914 Dec 2 04:58:07 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon rm", "name": "np0005541914"} : dispatch Dec 2 04:58:07 localhost ceph-mon[298296]: Safe to remove mon.np0005541914: new quorum should be ['np0005541912', 'np0005541913'] (from ['np0005541912', 'np0005541913']) Dec 2 04:58:07 localhost ceph-mon[298296]: Removing monitor np0005541914 from monmap... Dec 2 04:58:07 localhost ceph-mon[298296]: Removing daemon mon.np0005541914 from np0005541914.localdomain -- ports [] Dec 2 04:58:07 localhost ceph-mon[298296]: mon.np0005541912 calling monitor election Dec 2 04:58:07 localhost ceph-mon[298296]: mon.np0005541913 calling monitor election Dec 2 04:58:07 localhost ceph-mon[298296]: mon.np0005541912 is new leader, mons np0005541912,np0005541913 in quorum (ranks 0,1) Dec 2 04:58:07 localhost ceph-mon[298296]: overall HEALTH_OK Dec 2 04:58:07 localhost systemd[1]: var-lib-containers-storage-overlay-eb82e691c46520bece461b33e7dd2f630c337784f48462c20df3075c80f0bed0-merged.mount: Deactivated successfully. 
Dec 2 04:58:07 localhost podman[302351]: 2025-12-02 09:58:07.11942062 +0000 UTC m=+0.049910295 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:07 localhost podman[302351]: Dec 2 04:58:07 localhost podman[302351]: 2025-12-02 09:58:07.319788563 +0000 UTC m=+0.250278178 container create b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_driscoll, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) Dec 2 04:58:07 localhost systemd[1]: Started libpod-conmon-b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7.scope. Dec 2 04:58:07 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:07 localhost podman[302351]: 2025-12-02 09:58:07.37650874 +0000 UTC m=+0.306998365 container init b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_driscoll, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=1763362218, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main) Dec 2 04:58:07 localhost podman[302351]: 2025-12-02 09:58:07.386494818 +0000 UTC m=+0.316984443 container start b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_driscoll, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., 
architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph) Dec 2 04:58:07 localhost podman[302351]: 2025-12-02 09:58:07.386701383 +0000 UTC m=+0.317191008 container attach b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_driscoll, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
io.openshift.tags=rhceph ceph, io.openshift.expose-services=) Dec 2 04:58:07 localhost wonderful_driscoll[302365]: 167 167 Dec 2 04:58:07 localhost systemd[1]: libpod-b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7.scope: Deactivated successfully. Dec 2 04:58:07 localhost podman[302351]: 2025-12-02 09:58:07.392893449 +0000 UTC m=+0.323383064 container died b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_driscoll, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:58:07 localhost podman[302370]: 2025-12-02 09:58:07.48486495 +0000 UTC m=+0.079971992 container remove b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_driscoll, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, 
description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z) Dec 2 04:58:07 localhost systemd[1]: libpod-conmon-b8490e4f98024b09e7349bb7c81fe786447b4dd6621857380c291c518497ecd7.scope: Deactivated successfully. Dec 2 04:58:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:07 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541914 (monmap changed)... 
Dec 2 04:58:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:58:07 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:07 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541914 (monmap changed)... Dec 2 04:58:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:07 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:07 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:58:07 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:58:07 localhost nova_compute[281854]: 2025-12-02 09:58:07.804 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:08 localhost systemd[1]: tmp-crun.pogVUv.mount: Deactivated successfully. Dec 2 04:58:08 localhost systemd[1]: var-lib-containers-storage-overlay-3b3ca6abf0417dfd1a39b7a7b95701977844c5656b6c733d8017eeb759194dbf-merged.mount: Deactivated successfully. 
Dec 2 04:58:08 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:08 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:08 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:08 localhost ceph-mon[298296]: Reconfiguring crash.np0005541914 (monmap changed)... Dec 2 04:58:08 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:08 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:58:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:08 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:08 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Dec 2 04:58:08 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Dec 2 04:58:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Dec 2 04:58:08 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:58:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:08 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:08 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:58:08 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:58:08 localhost nova_compute[281854]: 2025-12-02 09:58:08.956 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:58:09 localhost podman[302387]: 2025-12-02 09:58:09.169950681 +0000 UTC m=+0.084852001 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public) Dec 2 04:58:09 localhost podman[302387]: 2025-12-02 09:58:09.189813134 +0000 UTC m=+0.104714464 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, 
Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 04:58:09 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:58:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:09 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:09 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:09 localhost ceph-mon[298296]: Reconfiguring osd.1 (monmap changed)... 
Dec 2 04:58:09 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:58:09 localhost ceph-mon[298296]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:58:09 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:09 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Dec 2 04:58:09 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Dec 2 04:58:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Dec 2 04:58:09 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:58:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:09 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:09 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:58:09 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:58:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 04:58:10 localhost podman[302407]: 2025-12-02 09:58:10.446010777 +0000 UTC m=+0.083070694 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:58:10 localhost podman[302407]: 2025-12-02 09:58:10.458971494 +0000 UTC m=+0.096031411 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:58:10 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 04:58:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:10 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:58:10 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... 
Dec 2 04:58:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 2 04:58:10 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:10 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:10 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:58:10 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:58:10 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:10 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:10 localhost ceph-mon[298296]: Reconfiguring osd.4 (monmap changed)... 
Dec 2 04:58:10 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:58:10 localhost ceph-mon[298296]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:58:10 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:10 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:10 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:10 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:11 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... Dec 2 04:58:11 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... 
Dec 2 04:58:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:58:11 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 2 04:58:11 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch Dec 2 04:58:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:11 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:11 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:58:11 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:58:12 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:12 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:12 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... 
Dec 2 04:58:12 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:58:12 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:12 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:12 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:12 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... Dec 2 04:58:12 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:12 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:58:12 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:12 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:12 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:12 localhost nova_compute[281854]: 2025-12-02 09:58:12.877 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:13 localhost 
ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:13 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 2 04:58:13 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:58:13 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:13 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:13 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:13 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:13 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:13 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:13 localhost nova_compute[281854]: 2025-12-02 09:58:13.981 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:14 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:14 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating 
np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:14 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:14 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:14 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:14 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:14 localhost ceph-mgr[288059]: [balancer INFO root] Optimize plan auto_2025-12-02_09:58:14 Dec 2 04:58:14 localhost ceph-mgr[288059]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 2 04:58:14 localhost ceph-mgr[288059]: [balancer INFO root] do_upmap Dec 2 04:58:14 localhost ceph-mgr[288059]: [balancer INFO root] pools ['images', '.mgr', 'manila_metadata', 'vms', 'manila_data', 'volumes', 'backups'] Dec 2 04:58:14 localhost ceph-mgr[288059]: [balancer INFO root] prepared 0/10 changes Dec 2 04:58:14 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] _maybe_adjust Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 04:58:15 
localhost ceph-mgr[288059]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32) Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Dec 2 04:58:15 localhost ceph-mgr[288059]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16) Dec 2 04:58:15 localhost ceph-mgr[288059]: [volumes INFO mgr_util] scanning for idle connections.. 
Dec 2 04:58:15 localhost ceph-mgr[288059]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 04:58:15 localhost ceph-mgr[288059]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 04:58:15 localhost ceph-mgr[288059]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 04:58:15 localhost ceph-mgr[288059]: [volumes INFO mgr_util] scanning for idle connections.. Dec 2 04:58:15 localhost ceph-mgr[288059]: [volumes INFO mgr_util] cleaning up connections: [] Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: images, start_after= Dec 2 04:58:15 localhost ceph-mgr[288059]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:15 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:15 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 2 04:58:15 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev e3621e36-1451-4294-bcba-377dbfd809b2 (Updating node-proxy deployment (+3 -> 3)) Dec 2 04:58:15 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev e3621e36-1451-4294-bcba-377dbfd809b2 (Updating node-proxy deployment (+3 -> 3)) Dec 2 04:58:15 localhost ceph-mgr[288059]: [progress INFO root] Completed event e3621e36-1451-4294-bcba-377dbfd809b2 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 2 04:58:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 04:58:15 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541912 (monmap changed)... Dec 2 04:58:15 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541912 (monmap changed)... Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:15 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:15 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:58:15 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:58:15 localhost podman[302768]: 2025-12-02 09:58:15.800528605 +0000 UTC m=+0.086087153 container health_status 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:58:15 localhost podman[302768]: 2025-12-02 09:58:15.81899137 +0000 UTC m=+0.104549938 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, 
maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 04:58:15 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.104 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.105 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.136 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.137 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2a6458d9-88ba-458b-9149-431f1bf926d7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.105820', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c2325c4-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': 'b46e31a5b3663171f380b99df0e98f1138a8a54a4fff4b93a0941cdb9da02a54'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.105820', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c233d48-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': '8014f02d9f821a9c491bdd689599d057764c955b2d72d4874447710b4d14b49b'}]}, 'timestamp': '2025-12-02 09:58:16.138303', '_unique_id': '4f4a8828cb9a467bb4e14622c64f7771'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.139 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.141 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.146 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2ec9a029-4200-412f-af18-127d6bb91dc3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.141284', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c2495e4-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': 'c552f2ce375524dee2acd5284e1109db1763988e0f26fcd330cfc3f85546c609'}]}, 'timestamp': '2025-12-02 09:58:16.147176', '_unique_id': '777f7b92af4d4c23ad48b5077dd6020d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.148 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.149 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.149 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.149 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0bd12528-eed9-440f-b690-5a8362836d54', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.149861', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c25135c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': '099f28fc0260231bbc43f5e29d822301e32bb258153d67312e5973a5a8b4eb3c'}]}, 'timestamp': '2025-12-02 09:58:16.150351', '_unique_id': '35b4af476df04ca189cf0f9eb36b5bee'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.151 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.152 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.152 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.152 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.153 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'eb2b7538-0df1-4e3f-b3b9-71c0aebc2788', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.152809', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c258602-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': 'b0e51e21c83046b7dbc1da17eb672868e50b89940136f97dd315159c789f85eb'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.152809', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c2595e8-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': '81d3e2a5527d8cb07e9aa4ed903cf91fb31c58ca8395586b2505415d00f2810f'}]}, 'timestamp': '2025-12-02 09:58:16.153707', '_unique_id': 'dbf22671b9e744aa8e29657ee94ecb4e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.155 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.156 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.167 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.168 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '96f3876c-b006-4ba0-ad4a-767a842e90a1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.156494', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c27c82c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.375581492, 'message_signature': 'f3e2efc5190d06d75bdb3ef43a931a872aad0dff31a056ef4269f4913d7c1f5a'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.156494', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c27d998-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.375581492, 'message_signature': 'd77d9edb32a2168ada945ae9ebd944616141fb3ae6725d14d3778bac2d122b53'}]}, 'timestamp': '2025-12-02 09:58:16.168498', '_unique_id': '816ec79913204a3692e63154c3b7d375'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.169 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.170 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.170 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6ca412db-ce3e-4116-b99b-64e96af4cb97', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.170781', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c284482-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': '4dd2810405d20b99e7f233d8c356e56a4a28b70549db75b8319f90cba2915ca4'}]}, 'timestamp': '2025-12-02 09:58:16.171306', '_unique_id': '237c7417d050472d8a47c53f1d46a7d0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.173 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.191 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '507ca41d-92ad-4756-9886-35c2787f0e81', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:58:16.173494', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '6c2b6194-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.410034845, 'message_signature': 
'72802dd934055397578692510bca27d4041dfc0ba76b3d0b49128af5c087a71f'}]}, 'timestamp': '2025-12-02 09:58:16.191698', '_unique_id': 'a0b993f6ad474616b30c8cfeb6452c97'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR 
oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.194 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '298f19f2-371a-45a9-8772-70c3e3b347b0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.194576', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 
'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c2be77c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': '48f3ecf8c47d68155da838a8b505f5f9140e66a2f01fd24584ad3fc8810aff3f'}]}, 'timestamp': '2025-12-02 09:58:16.195374', '_unique_id': '4e4ba8609bef4558ab77236ee14cf985'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging return 
fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.196 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.197 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.197 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8e0235d8-f5da-40b5-944d-e67d9eabd97e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.197566', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c2c5c52-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': '19be324fe464ff250467d199bc0ede06b0ea369472aef0132b27f25e2f0958ac'}]}, 'timestamp': '2025-12-02 09:58:16.198080', '_unique_id': '79e9d8e89f994659ace165be72c81246'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.198 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.200 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5fc1463f-71e8-40f6-b49e-a5ca7ec4fe6c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.200357', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c2cc714-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': 'bb9d38f5f86e4c6538b910516d63f35e31d6fba8f81d540ea6deac6aaeb3a58e'}]}, 'timestamp': '2025-12-02 09:58:16.200840', '_unique_id': 'f99dfa9184114834b53293a5f40e7bd9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.201 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.202 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.202 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 14590000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e68f05a0-a752-45f6-9397-c842123a1f96', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 14590000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T09:58:16.202937', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '6c2d2d30-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.410034845, 'message_signature': '47428d6bb6db1bb65cf762a32614bd8c81eca71f27991b56e77034463f05e704'}]}, 'timestamp': '2025-12-02 09:58:16.203410', '_unique_id': 'cf4d4964cf78454a837e198e8fd46aa8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.204 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:58:16.204 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '55080ed0-c558-4f8e-aab8-35f82fcfe8c5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.204870', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c2d7a88-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': 
'f07e8a5c5593d2af49187c9580ba004166daa2c5dc97436c77913bfa3cd2d160'}]}, 'timestamp': '2025-12-02 09:58:16.205316', '_unique_id': '30acf3002da942b38805a60222317b44'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR 
oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '61946cb6-2b44-4c5e-b0ee-0b8275d0bf42', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.206590', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 
0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c2db868-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': '6635af5ca6b9933d99a3272d10fae66658b4920b4223008e48b281a8b9444885'}]}, 'timestamp': '2025-12-02 09:58:16.206900', '_unique_id': '7cd1241fcee14c3fb14fc5e0f0c03b9f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR 
oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.208 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.208 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9654b410-02f8-4bbc-9b02-20739e2c7c10', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.208189', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c2df5b2-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': '1688479c048809062f5234faa0bef87b033c6dc5593fc51f79e5cf969c032174'}]}, 'timestamp': '2025-12-02 09:58:16.208465', '_unique_id': 'b7e83225227e4b0da58fd66c394691ab'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5d810936-eb54-461e-b6d3-3571b7591119', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.209819', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c2e3522-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.375581492, 'message_signature': 'ce7b0aab59e1e8656a4b1892afc97c1d0d4d2e436f53d3bc3951f05190b03af6'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.209819', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c2e3e82-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.375581492, 'message_signature': '59fb1f8cc89e8467f62d5ea05c1be75747abc2f9b9f302a521f7781648f08eb3'}]}, 'timestamp': '2025-12-02 09:58:16.210311', '_unique_id': '57a0a3aa4c89481493809c2d853eef55'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.211 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.211 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b69c64c2-5392-4c4a-922f-84a5d161b735', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T09:58:16.211607', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '6c2e7c08-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.360381316, 'message_signature': '026767cde7c5c83475b2908c1004e7a50bf77803110eaf6dad0d6f84a108b20c'}]}, 'timestamp': '2025-12-02 09:58:16.211910', '_unique_id': '2c91fe62ae2247eeb4ff5f8ba1b0549a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.213 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2d626e27-42ab-4e58-869a-64135d5ec3ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.213144', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c2eb70e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': 'ef7b10423c4abc8320372d76636bb5ae7a6b5cd5f6a55beb87bb8ee27cab9d8b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.213144', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c2ec06e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': 'a6405f00fdf831b964f03d0461b03c0e83d734ef17623f9bdabf58ca4c2daeb6'}]}, 'timestamp': '2025-12-02 09:58:16.213661', '_unique_id': '3c0677731c774e7a8b2f85f025bb107f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.214 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.214 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7f4a1f80-493f-40af-a552-023b17ceeab2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.214932', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c2efd72-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': '29b243fc3de3276ca1f208f1a48c26bd7d62f5a2241ddc421f40cfe07f4b9cce'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.214932', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c2f06f0-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': 'f7ff16a26245c52eb41c4867ec87cdf1c80f0230f1467af316c6d806718c2e54'}]}, 'timestamp': '2025-12-02 09:58:16.215444', '_unique_id': '68511aec8b2246219888296157afaf0d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:58:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.216 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b9bf2700-0458-4b6c-a31f-0d6d8c7f94e4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.217028', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c2f4f84-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.375581492, 'message_signature': 'f9990d208416e4cdbfe864da646f4f6e98e0ff49f0c84187360c9e198422f4e5'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.217028', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c2f5dda-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.375581492, 'message_signature': 'e1a4311c1b230949f6235538989479505267c6d3103d3d11ee2e1e32f3ae213f'}]}, 'timestamp': '2025-12-02 09:58:16.217707', '_unique_id': '6c1812423ef54b03868f3aee31341bfb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.218 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.219 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.219 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.219 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': '2a78e658-1584-4918-94c0-119ccfbc5530', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.219190', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c2fa380-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': '2d5286a2e3afcbe566e1102c29e188dc6dd2ce20b6a2b3c6ac2b4880f82257b8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.219190', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c2fad08-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': 'de8eacc1da6b7c4c25f0009ce01e53b541bb90b7f1a3fa7f8fe5ece2166cdfed'}]}, 'timestamp': '2025-12-02 09:58:16.219719', '_unique_id': '87de4fc210854e84bfd9b2604867a6db'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
09:58:16.220 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.221 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.221 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2d4f3aee-9f63-4875-921a-6a2d60fd3ce4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T09:58:16.221009', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6c2fea5c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': '7773ee5da22130d434608566ffbc9bd28686a88942268934dcda0805fcc0ef76'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T09:58:16.221009', 'resource_metadata': 
{'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6c2ff59c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11658.324901576, 'message_signature': '78f8faf97453666faf9c3dd03ba9f84b2edab4789154e17655dc48652c38ef2a'}]}, 'timestamp': '2025-12-02 09:58:16.221555', '_unique_id': '50a60e15af6b49d2b0f9f70a77306af0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging yield Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in 
_send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 04:58:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 09:58:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 04:58:16 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:16 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:16 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 04:58:16 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:16 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:16 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 04:58:16 localhost ceph-mon[298296]: 
from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:16 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:16 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:16 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:16 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:16 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:16 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:16 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:16 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:16 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:16 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Dec 2 04:58:16 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... 
Dec 2 04:58:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Dec 2 04:58:16 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:58:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:16 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:16 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:58:16 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:58:17 localhost ceph-mon[298296]: Reconfiguring crash.np0005541912 (monmap changed)... 
Dec 2 04:58:17 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain Dec 2 04:58:17 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:17 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:17 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 2 04:58:17 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:17 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:17 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Dec 2 04:58:17 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... 
Dec 2 04:58:17 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Dec 2 04:58:17 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:58:17 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:17 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:17 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:58:17 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:58:17 localhost nova_compute[281854]: 2025-12-02 09:58:17.872 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:17 localhost nova_compute[281854]: 2025-12-02 09:58:17.919 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:17 localhost nova_compute[281854]: 2025-12-02 09:58:17.922 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Triggering sync for uuid b254bb7f-2891-4b37-9c44-9700e301ce16 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 2 04:58:17 localhost nova_compute[281854]: 2025-12-02 09:58:17.923 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock 
"b254bb7f-2891-4b37-9c44-9700e301ce16" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:58:17 localhost nova_compute[281854]: 2025-12-02 09:58:17.923 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:58:17 localhost nova_compute[281854]: 2025-12-02 09:58:17.947 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:58:18 localhost ceph-mgr[288059]: [progress INFO root] Writing back 50 completed events Dec 2 04:58:18 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 2 04:58:18 localhost ceph-mon[298296]: Reconfiguring osd.2 (monmap changed)... 
Dec 2 04:58:18 localhost ceph-mon[298296]: Reconfiguring daemon osd.2 on np0005541912.localdomain Dec 2 04:58:18 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:18 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:18 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 2 04:58:18 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:58:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:58:18 localhost systemd[1]: tmp-crun.ZMdQJ7.mount: Deactivated successfully. Dec 2 04:58:18 localhost podman[302787]: 2025-12-02 09:58:18.448182753 +0000 UTC m=+0.087668777 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi 
) Dec 2 04:58:18 localhost podman[302787]: 2025-12-02 09:58:18.457914634 +0000 UTC m=+0.097400698 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:58:18 localhost podman[302788]: 2025-12-02 09:58:18.502739013 +0000 UTC m=+0.137397007 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Dec 2 04:58:18 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:58:18 localhost podman[302788]: 2025-12-02 09:58:18.569078669 +0000 UTC m=+0.203736693 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 04:58:18 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:58:18 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:18 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:18 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:18 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... Dec 2 04:58:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... 
Dec 2 04:58:18 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 2 04:58:18 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:18 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:18 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:18 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:58:18 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain Dec 2 04:58:18 localhost nova_compute[281854]: 2025-12-02 09:58:18.983 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:19 localhost ceph-mon[298296]: Reconfiguring osd.5 (monmap changed)... 
Dec 2 04:58:19 localhost ceph-mon[298296]: Reconfiguring daemon osd.5 on np0005541912.localdomain Dec 2 04:58:19 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:19 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:19 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:19 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:19 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541912.qwddia (monmap changed)... Dec 2 04:58:19 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541912.qwddia (monmap changed)... 
Dec 2 04:58:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:58:19 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 2 04:58:19 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch Dec 2 04:58:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:19 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:19 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:58:19 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:58:19 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.54159 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005541914.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:58:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Dec 2 04:58:19 localhost ceph-mon[298296]: 
mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Dec 2 04:58:19 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:58:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:19 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:19 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:58:19 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:58:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:20 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541913 (monmap changed)... Dec 2 04:58:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541913 (monmap changed)... 
Dec 2 04:58:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:58:20 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:20 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:20 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:20 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:20 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)... 
Dec 2 04:58:20 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:20 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:20 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:58:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:58:20 localhost podman[302887]:
Dec 2 04:58:20 localhost podman[302887]: 2025-12-02 09:58:20.935512111 +0000 UTC m=+0.081551263 container create 869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_germain, release=1763362218, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True)
Dec 2 04:58:20 localhost systemd[1]: Started libpod-conmon-869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d.scope.
Dec 2 04:58:21 localhost podman[302887]: 2025-12-02 09:58:20.903172486 +0000 UTC m=+0.049211718 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:58:21 localhost systemd[1]: Started libcrun container.
Dec 2 04:58:21 localhost podman[302887]: 2025-12-02 09:58:21.032812015 +0000 UTC m=+0.178851167 container init 869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_germain, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7)
Dec 2 04:58:21 localhost podman[302887]: 2025-12-02 09:58:21.046227364 +0000 UTC m=+0.192266506 container start 869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_germain, GIT_CLEAN=True, RELEASE=main, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:58:21 localhost podman[302887]: 2025-12-02 09:58:21.046494871 +0000 UTC m=+0.192534053 container attach 869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_germain, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, ceph=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:58:21 localhost vigilant_germain[302902]: 167 167
Dec 2 04:58:21 localhost systemd[1]: libpod-869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d.scope: Deactivated successfully.
Dec 2 04:58:21 localhost podman[302887]: 2025-12-02 09:58:21.050373265 +0000 UTC m=+0.196412437 container died 869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_germain, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, distribution-scope=public, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, release=1763362218, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 2 04:58:21 localhost podman[302907]: 2025-12-02 09:58:21.143341593 +0000 UTC m=+0.082849178 container remove 869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_germain, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, version=7, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218)
Dec 2 04:58:21 localhost systemd[1]: libpod-conmon-869d45fb29f7c3bb109928203df54752dd1e0618b10dc28537452fdbdc934a5d.scope: Deactivated successfully.
Dec 2 04:58:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:58:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:58:21 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 2 04:58:21 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 2 04:58:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 2 04:58:21 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:58:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:58:21 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:58:21 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:58:21 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:58:21 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:58:21 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:58:21 localhost ceph-mon[298296]: Deploying daemon mon.np0005541914 on np0005541914.localdomain
Dec 2 04:58:21 localhost ceph-mon[298296]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 2 04:58:21 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 2 04:58:21 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:21 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:21 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:58:21 localhost podman[302977]:
Dec 2 04:58:21 localhost podman[302977]: 2025-12-02 09:58:21.848559243 +0000 UTC m=+0.072552112 container create 74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_lederberg, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Dec 2 04:58:21 localhost systemd[1]: Started libpod-conmon-74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991.scope.
Dec 2 04:58:21 localhost systemd[1]: Started libcrun container.
Dec 2 04:58:21 localhost podman[302977]: 2025-12-02 09:58:21.921824233 +0000 UTC m=+0.145817102 container init 74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_lederberg, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph)
Dec 2 04:58:21 localhost podman[302977]: 2025-12-02 09:58:21.822349632 +0000 UTC m=+0.046342471 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:58:21 localhost podman[302977]: 2025-12-02 09:58:21.932312384 +0000 UTC m=+0.156305253 container start 74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_lederberg, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, io.openshift.expose-services=, RELEASE=main)
Dec 2 04:58:21 localhost podman[302977]: 2025-12-02 09:58:21.932568901 +0000 UTC m=+0.156561820 container attach 74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_lederberg, GIT_BRANCH=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True)
Dec 2 04:58:21 localhost zen_lederberg[302992]: 167 167
Dec 2 04:58:21 localhost systemd[1]: libpod-74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991.scope: Deactivated successfully.
Dec 2 04:58:21 localhost podman[302977]: 2025-12-02 09:58:21.938994544 +0000 UTC m=+0.162987453 container died 74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_lederberg, RELEASE=main, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, ceph=True, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 2 04:58:21 localhost systemd[1]: var-lib-containers-storage-overlay-6205354ee967f315095454d8cdd88af2da233efb04fd9b8ee27fa535d7feac96-merged.mount: Deactivated successfully.
Dec 2 04:58:22 localhost systemd[1]: var-lib-containers-storage-overlay-8e2362ccaa47a0e1a71729b7761a70346bd0162d86045abf1bc609ed0da6a617-merged.mount: Deactivated successfully.
Dec 2 04:58:22 localhost podman[302997]: 2025-12-02 09:58:22.027322867 +0000 UTC m=+0.079234491 container remove 74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_lederberg, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, architecture=x86_64, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 2 04:58:22 localhost systemd[1]: libpod-conmon-74fc70ec71210004b22e826c911dc0546428ab86bf8aa6a7ec0f290ad96d1991.scope: Deactivated successfully.
Dec 2 04:58:22 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:58:22 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:58:22 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Dec 2 04:58:22 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Dec 2 04:58:22 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 2 04:58:22 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.250219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502250271, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 956, "num_deletes": 251, "total_data_size": 1547847, "memory_usage": 1568032, "flush_reason": "Manual Compaction"}
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 2 04:58:22 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:58:22 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502257649, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 885725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13484, "largest_seqno": 14435, "table_properties": {"data_size": 881034, "index_size": 2162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12440, "raw_average_key_size": 21, "raw_value_size": 870973, "raw_average_value_size": 1538, "num_data_blocks": 93, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669485, "oldest_key_time": 1764669485, "file_creation_time": 1764669502, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 7466 microseconds, and 2729 cpu microseconds.
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.257691) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 885725 bytes OK
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.257711) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.259478) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.259494) EVENT_LOG_v1 {"time_micros": 1764669502259489, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.259514) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1542608, prev total WAL file size 1542608, number of live WAL files 2.
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.260009) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(864KB)], [18(17MB)]
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502260066, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 19563702, "oldest_snapshot_seqno": -1}
Dec 2 04:58:22 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 2 04:58:22 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 2 04:58:22 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 11439 keys, 16326909 bytes, temperature: kUnknown
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502386015, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 16326909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16260316, "index_size": 36924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28613, "raw_key_size": 307813, "raw_average_key_size": 26, "raw_value_size": 16063502, "raw_average_value_size": 1404, "num_data_blocks": 1401, "num_entries": 11439, "num_filter_entries": 11439, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764669502, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.386367) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 16326909 bytes
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.388107) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.2 rd, 129.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 17.8 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(40.5) write-amplify(18.4) OK, records in: 11969, records dropped: 530 output_compression: NoCompression
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.388173) EVENT_LOG_v1 {"time_micros": 1764669502388156, "job": 8, "event": "compaction_finished", "compaction_time_micros": 126043, "compaction_time_cpu_micros": 51486, "output_level": 6, "num_output_files": 1, "total_output_size": 16326909, "num_input_records": 11969, "num_output_records": 11439, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502388441, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502390866, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.259911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.390932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.390942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.390945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.390948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:58:22 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-09:58:22.390950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 2 04:58:22 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:58:22 localhost ceph-mon[298296]: Reconfiguring osd.0 (monmap changed)...
Dec 2 04:58:22 localhost ceph-mon[298296]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 2 04:58:22 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:22 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:22 localhost ceph-mon[298296]: Reconfiguring osd.3 (monmap changed)...
Dec 2 04:58:22 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:58:22 localhost ceph-mon[298296]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:58:22 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:22 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:22 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:22 localhost podman[303074]: Dec 2 04:58:22 localhost podman[303074]: 2025-12-02 09:58:22.842154511 +0000 UTC m=+0.081899883 container create edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, 
architecture=x86_64) Dec 2 04:58:22 localhost systemd[1]: Started libpod-conmon-edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565.scope. Dec 2 04:58:22 localhost systemd[1]: Started libcrun container. Dec 2 04:58:22 localhost podman[303074]: 2025-12-02 09:58:22.808258764 +0000 UTC m=+0.048004196 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:22 localhost podman[303074]: 2025-12-02 09:58:22.919685676 +0000 UTC m=+0.159431038 container init edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:58:22 localhost nova_compute[281854]: 2025-12-02 09:58:22.957 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:22 localhost podman[303074]: 2025-12-02 09:58:22.965092331 +0000 UTC 
m=+0.204837703 container start edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, version=7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) 
Dec 2 04:58:22 localhost podman[303074]: 2025-12-02 09:58:22.96543639 +0000 UTC m=+0.205181792 container attach edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, architecture=x86_64, RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, name=rhceph) Dec 2 04:58:22 localhost peaceful_maxwell[303090]: 167 167 Dec 2 04:58:22 localhost systemd[1]: libpod-edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565.scope: Deactivated successfully. 
Dec 2 04:58:22 localhost podman[303074]: 2025-12-02 09:58:22.969418046 +0000 UTC m=+0.209163428 container died edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True) Dec 2 04:58:23 localhost systemd[1]: var-lib-containers-storage-overlay-a81db17e6649e0d98962f92509d22b46b8fe0d5e24d3aa58688c1a1819f3fafa-merged.mount: Deactivated successfully. 
Dec 2 04:58:23 localhost podman[303095]: 2025-12-02 09:58:23.073954154 +0000 UTC m=+0.093002840 container remove edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_maxwell, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True) Dec 2 04:58:23 localhost systemd[1]: libpod-conmon-edb0264906137dad1b75cc707ab22d27fecb5937c320d69346a2b6b7a83c7565.scope: Deactivated successfully. Dec 2 04:58:23 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:23 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:23 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:58:23 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:58:23 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Dec 2 04:58:23 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:23 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:23 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:23 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:23 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:23 localhost podman[303174]: Dec 2 04:58:23 localhost podman[303174]: 2025-12-02 09:58:23.839654983 +0000 UTC m=+0.067981160 container create 68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_aryabhata, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.41.4, vcs-type=git, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:58:23 localhost systemd[1]: Started libpod-conmon-68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3.scope. Dec 2 04:58:23 localhost systemd[1]: Started libcrun container. Dec 2 04:58:23 localhost podman[303174]: 2025-12-02 09:58:23.902401341 +0000 UTC m=+0.130727498 container init 68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_aryabhata, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, description=Red Hat Ceph Storage 7) Dec 2 04:58:23 localhost podman[303174]: 2025-12-02 09:58:23.806588517 +0000 UTC m=+0.034914724 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:23 localhost podman[303174]: 2025-12-02 09:58:23.908857525 +0000 UTC m=+0.137183682 container start 68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_aryabhata, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.) 
Dec 2 04:58:23 localhost podman[303174]: 2025-12-02 09:58:23.911720241 +0000 UTC m=+0.140046438 container attach 68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_aryabhata, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, distribution-scope=public, release=1763362218, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:58:23 localhost affectionate_aryabhata[303190]: 167 167 Dec 2 04:58:23 localhost systemd[1]: libpod-68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3.scope: Deactivated successfully. 
Dec 2 04:58:23 localhost podman[303174]: 2025-12-02 09:58:23.914265669 +0000 UTC m=+0.142591856 container died 68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_aryabhata, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True) Dec 2 04:58:23 localhost systemd[1]: var-lib-containers-storage-overlay-8fa32706b9d449e9bc607c4dfeb310e341a6de88b90949854234d8eebe9cf971-merged.mount: Deactivated successfully. 
Dec 2 04:58:24 localhost nova_compute[281854]: 2025-12-02 09:58:24.041 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:24 localhost podman[303195]: 2025-12-02 09:58:24.044912046 +0000 UTC m=+0.119455198 container remove 68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_aryabhata, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, RELEASE=main) Dec 2 04:58:24 localhost systemd[1]: libpod-conmon-68b30b4e39b49bba39e737696f46b8b0f63d6406e9acb0017b44fcb26cec85a3.scope: Deactivated successfully. 
Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:24 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:58:24 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 2 04:58:24 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 2 04:58:24 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:24 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:24 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 
2 04:58:24 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:58:24 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... Dec 2 04:58:24 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:24 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:24 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 2 04:58:24 localhost 
ceph-mon[298296]: mon.np0005541913@1(peon) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 2 04:58:24 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:24 localhost podman[303264]: Dec 2 04:58:24 localhost podman[303264]: 2025-12-02 09:58:24.660288442 +0000 UTC m=+0.053767710 container create c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_khayyam, name=rhceph, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., ceph=True, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 04:58:24 localhost systemd[1]: Started libpod-conmon-c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88.scope. 
Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 2 04:58:24 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:24 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:24 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (2) No such file or directory
Dec 2 04:58:24 localhost systemd[1]: Started libcrun container.
Dec 2 04:58:24 localhost podman[303264]: 2025-12-02 09:58:24.724703425 +0000 UTC m=+0.118182683 container init c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_khayyam, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, release=1763362218, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:58:24 localhost podman[303264]: 2025-12-02 09:58:24.73126297 +0000 UTC m=+0.124742238 container start c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_khayyam, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, architecture=x86_64)
Dec 2 04:58:24 localhost podman[303264]: 2025-12-02 09:58:24.73164297 +0000 UTC m=+0.125122228 container attach c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_khayyam, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 04:58:24 localhost musing_khayyam[303280]: 167 167
Dec 2 04:58:24 localhost systemd[1]: libpod-c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88.scope: Deactivated successfully.
Dec 2 04:58:24 localhost podman[303264]: 2025-12-02 09:58:24.734459466 +0000 UTC m=+0.127938724 container died c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_khayyam, release=1763362218, maintainer=Guillaume Abrioux , architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 2 04:58:24 localhost podman[303264]: 2025-12-02 09:58:24.643279317 +0000 UTC m=+0.036758645 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:58:24 localhost podman[303285]: 2025-12-02 09:58:24.816162403 +0000 UTC m=+0.072610704 container remove c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_khayyam, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, version=7, ceph=True, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Dec 2 04:58:24 localhost systemd[1]: libpod-conmon-c4c0f5b875fb591be114a63f04a4a357c9053eecc1b5a62bcd9a6bc7e3ca4a88.scope: Deactivated successfully.
Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 2 04:58:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 2 04:58:24 localhost systemd[1]: var-lib-containers-storage-overlay-6f39e0b73471d4bf964bb7a0a6c573f1334a946fb2daea93a513a94e450e9745-merged.mount: Deactivated successfully.
Dec 2 04:58:25 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 2 04:58:25 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 2 04:58:25 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:25 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:25 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:25 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:25 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (2) No such file or directory
Dec 2 04:58:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:58:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 2 04:58:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 2 04:58:26 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:58:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 2 04:58:26 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:26 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:26 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (2) No such file or directory
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 2 04:58:27 localhost ceph-mgr[288059]: [progress INFO root] update: starting ev a93bf519-2b05-4bf0-ace4-a29a71bb2529 (Updating node-proxy deployment (+3 -> 3))
Dec 2 04:58:27 localhost ceph-mgr[288059]: [progress INFO root] complete: finished ev a93bf519-2b05-4bf0-ace4-a29a71bb2529 (Updating node-proxy deployment (+3 -> 3))
Dec 2 04:58:27 localhost ceph-mgr[288059]: [progress INFO root] Completed event a93bf519-2b05-4bf0-ace4-a29a71bb2529 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e16 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 2 04:58:27 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:27 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:27 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:58:27 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(probing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0)
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(cluster) log [INF] : mon.np0005541913 calling monitor election
Dec 2 04:58:27 localhost ceph-mon[298296]: paxos.1).electionLogic(64) init, last seen epoch 64
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:27 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (22) Invalid argument
Dec 2 04:58:27 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:58:27 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:27 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:27 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:27 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:27 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (22) Invalid argument
Dec 2 04:58:28 localhost nova_compute[281854]: 2025-12-02 09:58:27.998 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:58:28 localhost ceph-mgr[288059]: [progress INFO root] Writing back 50 completed events
Dec 2 04:58:28 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 2 04:58:28 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:58:28 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:28 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:28 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:28 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (22) Invalid argument
Dec 2 04:58:29 localhost nova_compute[281854]: 2025-12-02 09:58:29.091 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:58:29 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:29 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:29 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:29 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (22) Invalid argument
Dec 2 04:58:30 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:58:30 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:30 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:30 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:30 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (22) Invalid argument
Dec 2 04:58:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 04:58:31 localhost podman[303388]: 2025-12-02 09:58:31.420701302 +0000 UTC m=+0.066122590 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:58:31 localhost podman[303388]: 2025-12-02 09:58:31.461069292 +0000 UTC m=+0.106490570 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Dec 2 04:58:31 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 04:58:31 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:31 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:31 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:31 localhost ceph-mgr[288059]: mgr finish mon failed to return metadata for mon.np0005541914: (22) Invalid argument
Dec 2 04:58:32 localhost ceph-mon[298296]: paxos.1).electionLogic(65) init, last seen epoch 65, mid-election, bumping
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541913@1(electing) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:58:32 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:58:32 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:58:32 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:58:32 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541912 calling monitor election
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541913 calling monitor election
Dec 2 04:58:32 localhost ceph-mon[298296]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 2 04:58:32 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541914 calling monitor election
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541912 is new leader, mons np0005541912,np0005541913,np0005541914 in quorum (ranks 0,1,2)
Dec 2 04:58:32 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 2 04:58:32 localhost ceph-mon[298296]: overall HEALTH_OK
Dec 2 04:58:32 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:32 localhost ceph-mgr[288059]: mgr.server handle_open ignoring open from mon.np0005541914 172.18.0.108:0/645205908; not ready for session (expect reconnect)
Dec 2 04:58:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 2 04:58:32 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 2 04:58:33 localhost nova_compute[281854]: 2025-12-02 09:58:33.028 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:58:33 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 2 04:58:33 localhost ceph-mgr[288059]: mgr.server handle_report got status from non-daemon mon.np0005541914
Dec 2 04:58:33 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:33.711+0000 7f783c290640 -1 mgr.server handle_report got status from non-daemon mon.np0005541914
Dec 2 04:58:33 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:58:33 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:58:33 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 2 04:58:33 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 2 04:58:33 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 2 04:58:33 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:58:33 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:58:33 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:58:33 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:58:33 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:58:33 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 2 04:58:33 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1722694794' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 2 04:58:34 localhost openstack_network_exporter[242845]: ERROR 09:58:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:58:34 localhost openstack_network_exporter[242845]: ERROR 09:58:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 04:58:34 localhost openstack_network_exporter[242845]: ERROR 09:58:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 04:58:34 localhost openstack_network_exporter[242845]: ERROR 09:58:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 04:58:34 localhost openstack_network_exporter[242845]:
Dec 2 04:58:34 localhost openstack_network_exporter[242845]: ERROR 09:58:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 04:58:34 localhost openstack_network_exporter[242845]:
Dec 2 04:58:34 localhost nova_compute[281854]: 2025-12-02 09:58:34.100 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:58:34 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:58:34 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:34 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:34 localhost ceph-mon[298296]: Reconfiguring osd.2 (monmap changed)...
Dec 2 04:58:34 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:58:34 localhost ceph-mon[298296]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:58:34 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:58:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:58:35 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 2 04:58:35 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 2 04:58:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 2 04:58:35 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:58:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:58:35 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:58:35 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:58:35 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:58:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:58:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:58:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:58:36 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:36 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:36 localhost ceph-mon[298296]: Reconfiguring osd.5 (monmap changed)...
Dec 2 04:58:36 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:58:36 localhost ceph-mon[298296]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:58:36 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:36 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:58:36 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:58:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 2 04:58:36 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:58:36 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:58:36 localhost podman[240799]: time="2025-12-02T09:58:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 04:58:36 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:58:36 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:58:36 localhost podman[240799]: @ - - [02/Dec/2025:09:58:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1"
Dec 2 04:58:36 localhost podman[240799]: @ - - [02/Dec/2025:09:58:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18718 "" "Go-http-client/1.1"
Dec 2 04:58:36 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 2 04:58:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 2 04:58:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 2 04:58:36 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:58:36 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 2 04:58:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 2 04:58:36 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 2 04:58:36 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 2 04:58:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 2 04:58:36 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 2 04:58:36 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:58:36 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 2 04:58:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:37 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 2 04:58:37 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 2 04:58:37 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 2 04:58:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm'
Dec 2 04:58:37 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:37 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 2 04:58:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 04:58:37 localhost podman[303408]: 2025-12-02 09:58:37.443993558 +0000 UTC m=+0.082782056 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 04:58:37 localhost podman[303408]: 2025-12-02 09:58:37.449913466 +0000 UTC 
m=+0.088702004 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Dec 2 04:58:37 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:58:37 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:37 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:37 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541913 (monmap changed)... Dec 2 04:58:37 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541913 (monmap changed)... Dec 2 04:58:37 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 2 04:58:37 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:37 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:37 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:37 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:37 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:38 localhost ceph-mgr[288059]: log_channel(audit) log [DBG] : from='client.64100 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": 
"osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch Dec 2 04:58:38 localhost nova_compute[281854]: 2025-12-02 09:58:38.077 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:38 localhost ceph-mgr[288059]: [cephadm INFO root] Reconfig service osd.default_drive_group Dec 2 04:58:38 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: 
mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)... Dec 2 04:58:38 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 
04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:38 localhost podman[303479]: Dec 2 04:58:38 localhost podman[303479]: 2025-12-02 09:58:38.53565333 +0000 UTC m=+0.072116101 container create 9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_wu, version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218) Dec 2 04:58:38 localhost systemd[1]: Started libpod-conmon-9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82.scope. Dec 2 04:58:38 localhost systemd[1]: Started libcrun container. Dec 2 04:58:38 localhost podman[303479]: 2025-12-02 09:58:38.499955524 +0000 UTC m=+0.036418375 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:38 localhost podman[303479]: 2025-12-02 09:58:38.606786993 +0000 UTC m=+0.143249764 container init 9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_wu, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-type=git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:38 localhost podman[303479]: 2025-12-02 09:58:38.61711138 +0000 UTC m=+0.153574131 container start 9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_wu, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7) Dec 2 04:58:38 localhost podman[303479]: 2025-12-02 09:58:38.617593472 +0000 UTC m=+0.154056243 container attach 9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_wu, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, 
GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main) Dec 2 04:58:38 localhost frosty_wu[303494]: 167 167 Dec 2 04:58:38 localhost systemd[1]: libpod-9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82.scope: Deactivated successfully. Dec 2 04:58:38 localhost podman[303479]: 2025-12-02 09:58:38.622109774 +0000 UTC m=+0.158572545 container died 9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_wu, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph 
Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 2 04:58:38 localhost ceph-mgr[288059]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail Dec 2 04:58:38 localhost podman[303499]: 2025-12-02 09:58:38.709222854 +0000 UTC m=+0.081386268 container remove 9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_wu, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_BRANCH=main, RELEASE=main, release=1763362218, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=) Dec 2 04:58:38 localhost systemd[1]: libpod-conmon-9dd2786e0e70226825c3c2a0970a3d7ae2fb2853670c4582a697173dd015ce82.scope: Deactivated successfully. 
Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:38 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Dec 2 04:58:38 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:58:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 2 04:58:38 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 2 04:58:38 localhost ceph-mgr[288059]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:58:38 localhost ceph-mgr[288059]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:58:39 localhost nova_compute[281854]: 2025-12-02 09:58:39.137 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. 
Dec 2 04:58:39 localhost ceph-mon[298296]: Reconfiguring crash.np0005541913 (monmap changed)... Dec 2 04:58:39 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain Dec 2 04:58:39 localhost ceph-mon[298296]: Reconfig service osd.default_drive_group Dec 2 04:58:39 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:39 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:39 localhost ceph-mon[298296]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 2 04:58:39 localhost podman[303565]: 2025-12-02 09:58:39.460436256 +0000 UTC m=+0.098140867 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc.) 
Dec 2 04:58:39 localhost podman[303565]: 2025-12-02 09:58:39.481090569 +0000 UTC m=+0.118795150 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, managed_by=edpm_ansible) Dec 2 04:58:39 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 04:58:39 localhost podman[303573]: Dec 2 04:58:39 localhost podman[303573]: 2025-12-02 09:58:39.511334338 +0000 UTC m=+0.126242159 container create a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_stonebraker, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1763362218, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7) Dec 2 04:58:39 localhost systemd[1]: Started libpod-conmon-a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e.scope. Dec 2 04:58:39 localhost systemd[1]: var-lib-containers-storage-overlay-eaee7d597b472a45c4e8a1b44b0f12246d330ebb129c6cd5dd8357cd72d3c3a8-merged.mount: Deactivated successfully. Dec 2 04:58:39 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:39 localhost podman[303573]: 2025-12-02 09:58:39.477202815 +0000 UTC m=+0.092110656 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:39 localhost podman[303573]: 2025-12-02 09:58:39.583300224 +0000 UTC m=+0.198208025 container init a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_stonebraker, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, ceph=True, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7) Dec 2 04:58:39 localhost podman[303573]: 2025-12-02 09:58:39.593563079 +0000 UTC m=+0.208470880 container start a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_stonebraker, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., release=1763362218, name=rhceph, summary=Provides the 
latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=) Dec 2 04:58:39 localhost podman[303573]: 2025-12-02 09:58:39.594716919 +0000 UTC m=+0.209624790 container attach a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_stonebraker, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=) Dec 2 04:58:39 localhost zen_stonebraker[303601]: 167 167 Dec 2 04:58:39 localhost systemd[1]: libpod-a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e.scope: Deactivated successfully. Dec 2 04:58:39 localhost podman[303573]: 2025-12-02 09:58:39.617265193 +0000 UTC m=+0.232172994 container died a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_stonebraker, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public) Dec 2 04:58:39 localhost podman[303606]: 2025-12-02 09:58:39.71020684 +0000 UTC m=+0.081521442 container remove a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zen_stonebraker, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:39 localhost systemd[1]: libpod-conmon-a3a86c175695f9f10a232a73c4d6576ddcce9249b6e81b67295a3e00112da92e.scope: Deactivated successfully. Dec 2 04:58:39 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0) Dec 2 04:58:39 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0) Dec 2 04:58:39 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 e90: 6 total, 6 up, 6 in Dec 2 04:58:40 localhost systemd[1]: session-69.scope: Deactivated successfully. Dec 2 04:58:40 localhost systemd[1]: session-69.scope: Consumed 21.232s CPU time. 
Dec 2 04:58:40 localhost systemd-logind[757]: Session 69 logged out. Waiting for processes to exit. Dec 2 04:58:40 localhost systemd-logind[757]: Removed session 69. Dec 2 04:58:40 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: ignoring --setuser ceph since I am not root Dec 2 04:58:40 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: ignoring --setgroup ceph since I am not root Dec 2 04:58:40 localhost ceph-mgr[288059]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Dec 2 04:58:40 localhost ceph-mgr[288059]: pidfile_write: ignore empty --pid-file Dec 2 04:58:40 localhost ceph-mgr[288059]: mgr[py] Loading python module 'alerts' Dec 2 04:58:40 localhost ceph-mgr[288059]: mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 2 04:58:40 localhost ceph-mgr[288059]: mgr[py] Loading python module 'balancer' Dec 2 04:58:40 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:40.144+0000 7f2d05246140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 2 04:58:40 localhost ceph-mgr[288059]: mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 2 04:58:40 localhost ceph-mgr[288059]: mgr[py] Loading python module 'cephadm' Dec 2 04:58:40 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:40.212+0000 7f2d05246140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 2 04:58:40 localhost sshd[303653]: main: sshd: ssh-rsa algorithm is disabled Dec 2 04:58:40 localhost systemd-logind[757]: New session 71 of user ceph-admin. Dec 2 04:58:40 localhost systemd[1]: Started Session 71 of User ceph-admin. Dec 2 04:58:40 localhost ceph-mon[298296]: Reconfiguring osd.0 (monmap changed)... 
Dec 2 04:58:40 localhost ceph-mon[298296]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:58:40 localhost ceph-mon[298296]: from='client.? 172.18.0.200:0/3934454104' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 04:58:40 localhost ceph-mon[298296]: Activating manager daemon np0005541912.qwddia Dec 2 04:58:40 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:40 localhost ceph-mon[298296]: from='client.? 172.18.0.200:0/3934454104' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 2 04:58:40 localhost ceph-mon[298296]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' Dec 2 04:58:40 localhost ceph-mon[298296]: Manager daemon np0005541912.qwddia is now available Dec 2 04:58:40 localhost ceph-mon[298296]: removing stray HostCache host record np0005541911.localdomain.devices.0 Dec 2 04:58:40 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"} : dispatch Dec 2 04:58:40 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"}]': finished Dec 2 04:58:40 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"} : dispatch Dec 2 04:58:40 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"}]': finished Dec 2 04:58:40 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541912.qwddia/mirror_snapshot_schedule"} 
: dispatch Dec 2 04:58:40 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541912.qwddia/trash_purge_schedule"} : dispatch Dec 2 04:58:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:58:40 localhost systemd[1]: var-lib-containers-storage-overlay-359bd3fde74013314e3669a5a3d0da96f652ca71e67c6b8dad8904cb260b40e3-merged.mount: Deactivated successfully. Dec 2 04:58:40 localhost podman[303675]: 2025-12-02 09:58:40.609829942 +0000 UTC m=+0.094514380 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:58:40 localhost podman[303675]: 2025-12-02 09:58:40.641040808 +0000 UTC m=+0.125725206 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 04:58:40 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:58:40 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:40 localhost ceph-mgr[288059]: mgr[py] Loading python module 'crash' Dec 2 04:58:40 localhost ceph-mgr[288059]: mgr[py] Module crash has missing NOTIFY_TYPES member Dec 2 04:58:40 localhost ceph-mgr[288059]: mgr[py] Loading python module 'dashboard' Dec 2 04:58:40 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:40.867+0000 7f2d05246140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Dec 2 04:58:41 localhost podman[303793]: 2025-12-02 09:58:41.347393918 +0000 UTC m=+0.086668690 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9) Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Loading python module 'devicehealth' Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Loading python module 'diskprediction_local' Dec 2 04:58:41 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:41.441+0000 7f2d05246140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 2 04:58:41 localhost podman[303793]: 2025-12-02 09:58:41.455908101 +0000 UTC m=+0.195182883 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.buildah.version=1.41.4, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z) Dec 2 04:58:41 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Dec 2 04:58:41 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Dec 2 04:58:41 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: from numpy import show_config as show_numpy_config Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Loading python module 'influx' Dec 2 04:58:41 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:41.591+0000 7f2d05246140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Module influx has missing NOTIFY_TYPES member Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Loading python module 'insights' Dec 2 04:58:41 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:41.649+0000 7f2d05246140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Loading python module 'iostat' Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 2 04:58:41 localhost ceph-mgr[288059]: mgr[py] Loading python module 'k8sevents' Dec 2 04:58:41 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 
2025-12-02T09:58:41.765+0000 7f2d05246140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'localpool' Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'mds_autoscaler' Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'mirroring' Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'nfs' Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'orchestrator' Dec 2 04:58:42 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:42.520+0000 7f2d05246140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'osd_perf_query' Dec 2 04:58:42 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:42.672+0000 7f2d05246140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:42.739+0000 7f2d05246140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'osd_support' Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:42.797+0000 7f2d05246140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 
'pg_autoscaler' Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'progress' Dec 2 04:58:42 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:42.868+0000 7f2d05246140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Module progress has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:42.935+0000 7f2d05246140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Dec 2 04:58:42 localhost ceph-mgr[288059]: mgr[py] Loading python module 'prometheus' Dec 2 04:58:43 localhost ceph-mon[298296]: [02/Dec/2025:09:58:41] ENGINE Bus STARTING Dec 2 04:58:43 localhost ceph-mon[298296]: [02/Dec/2025:09:58:41] ENGINE Serving on http://172.18.0.106:8765 Dec 2 04:58:43 localhost ceph-mon[298296]: [02/Dec/2025:09:58:41] ENGINE Serving on https://172.18.0.106:7150 Dec 2 04:58:43 localhost ceph-mon[298296]: [02/Dec/2025:09:58:41] ENGINE Bus STARTED Dec 2 04:58:43 localhost ceph-mon[298296]: [02/Dec/2025:09:58:41] ENGINE Client ('172.18.0.106', 43976) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 04:58:43 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:43 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:43 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:43 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:43 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' 
entity='mgr.np0005541912.qwddia' Dec 2 04:58:43 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:43 localhost nova_compute[281854]: 2025-12-02 09:58:43.081 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:43 localhost ceph-mgr[288059]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 2 04:58:43 localhost ceph-mgr[288059]: mgr[py] Loading python module 'rbd_support' Dec 2 04:58:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:43.254+0000 7f2d05246140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 2 04:58:43 localhost ceph-mgr[288059]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 2 04:58:43 localhost ceph-mgr[288059]: mgr[py] Loading python module 'restful' Dec 2 04:58:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:43.339+0000 7f2d05246140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 2 04:58:43 localhost ceph-mgr[288059]: mgr[py] Loading python module 'rgw' Dec 2 04:58:43 localhost ceph-mgr[288059]: mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 2 04:58:43 localhost ceph-mgr[288059]: mgr[py] Loading python module 'rook' Dec 2 04:58:43 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:43.684+0000 7f2d05246140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 2 04:58:43 localhost nova_compute[281854]: 2025-12-02 09:58:43.879 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Module rook has missing 
NOTIFY_TYPES member Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'selftest' Dec 2 04:58:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:44.129+0000 7f2d05246140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Dec 2 04:58:44 localhost nova_compute[281854]: 2025-12-02 09:58:44.139 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'snap_schedule' Dec 2 04:58:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:44.192+0000 7f2d05246140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'stats' Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'status' Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Module status has missing NOTIFY_TYPES member Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'telegraf' Dec 2 04:58:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:44.398+0000 7f2d05246140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'telemetry' Dec 2 04:58:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:44.462+0000 7f2d05246140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 2 04:58:44 localhost 
ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:44.614+0000 7f2d05246140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'test_orchestrator'
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 2 04:58:44 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:58:44 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 2 04:58:44 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:58:44 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 2 04:58:44 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 04:58:44 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:44 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:44 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'volumes'
Dec 2 04:58:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:44.773+0000 7f2d05246140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 2 04:58:44 localhost ceph-mgr[288059]: mgr[py] Loading python module 'zabbix'
Dec 2 04:58:44 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:44.974+0000 7f2d05246140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 2 04:58:45 localhost ceph-mgr[288059]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 2 04:58:45 localhost ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541913-mfesdm[288055]: 2025-12-02T09:58:45.041+0000 7f2d05246140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 2 04:58:45 localhost ceph-mgr[288059]: ms_deliver_dispatch: unhandled message 0x563adf5ab1e0 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Dec 2 04:58:45 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.106:6810/2383186409
Dec 2 04:58:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 04:58:45 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:45 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:45 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 2 04:58:45 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:58:45 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:58:45 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 2 04:58:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 04:58:46 localhost podman[304552]: 2025-12-02 09:58:46.047138397 +0000 UTC m=+0.085980541 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 04:58:46 localhost podman[304552]: 2025-12-02 09:58:46.056498728 +0000 UTC m=+0.095340902 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 04:58:46 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 04:58:46 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:58:46 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:58:46 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 2 04:58:46 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:46 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 2 04:58:47 localhost nova_compute[281854]: 2025-12-02 09:58:47.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:58:47 localhost nova_compute[281854]: 2025-12-02 09:58:47.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 04:58:47 localhost ceph-mon[298296]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 2 04:58:47 localhost ceph-mon[298296]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 2 04:58:47 localhost ceph-mon[298296]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 2 04:58:47 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:47 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:47 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:47 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:47 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 2 04:58:48 localhost nova_compute[281854]: 2025-12-02 09:58:48.119 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 04:58:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 04:58:48 localhost podman[304732]: 2025-12-02 09:58:48.801774337 +0000 UTC m=+0.090859092 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 2 04:58:48 localhost nova_compute[281854]: 2025-12-02 09:58:48.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 04:58:48 localhost nova_compute[281854]: 2025-12-02 09:58:48.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 2 04:58:48 localhost nova_compute[281854]: 2025-12-02 09:58:48.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 2 04:58:48 localhost podman[304732]: 2025-12-02 09:58:48.853053189 +0000 UTC m=+0.142138004 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 2 04:58:48 localhost podman[304731]: 2025-12-02 09:58:48.862308257 +0000 UTC m=+0.155607505 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:58:48 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 04:58:48 localhost podman[304731]: 2025-12-02 09:58:48.897299683 +0000 UTC m=+0.190598941 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 04:58:48 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 04:58:48 localhost ceph-mon[298296]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 2 04:58:48 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:48 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:48 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:48 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia'
Dec 2 04:58:48 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 2 04:58:49 localhost nova_compute[281854]: 2025-12-02 09:58:49.144 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 04:58:49 localhost podman[304811]:
Dec 2 04:58:49 localhost podman[304811]: 2025-12-02 09:58:49.255901659 +0000 UTC m=+0.085920310 container create 82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_kare, architecture=x86_64, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:58:49 localhost systemd[1]: Started libpod-conmon-82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d.scope.
Dec 2 04:58:49 localhost systemd[1]: Started libcrun container.
Dec 2 04:58:49 localhost podman[304811]: 2025-12-02 09:58:49.331823141 +0000 UTC m=+0.161841822 container init 82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_kare, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, release=1763362218, io.openshift.expose-services=, version=7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7)
Dec 2 04:58:49 localhost podman[304811]: 2025-12-02 09:58:49.232234475 +0000 UTC m=+0.062253156 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 04:58:49 localhost podman[304811]: 2025-12-02 09:58:49.34522762 +0000 UTC m=+0.175246291 container start 82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_kare, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 04:58:49 localhost podman[304811]: 2025-12-02 09:58:49.345516557 +0000 UTC m=+0.175535208 container attach 82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_kare, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, release=1763362218, RELEASE=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 04:58:49 localhost bold_kare[304827]: 167 167
Dec 2 04:58:49 localhost podman[304811]: 2025-12-02 09:58:49.34976496 +0000 UTC m=+0.179783661 container died 82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_kare, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph)
Dec 2 04:58:49 localhost systemd[1]: libpod-82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d.scope: Deactivated successfully.
Dec 2 04:58:49 localhost podman[304832]: 2025-12-02 09:58:49.445860122 +0000 UTC m=+0.084970735 container remove 82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_kare, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=)
Dec 2 04:58:49 localhost systemd[1]: libpod-conmon-82bdf7eb7c8f4955a3639abfb870a5c57c2d8af7c36367832891f6860620874d.scope: Deactivated successfully.
Dec 2 04:58:49 localhost nova_compute[281854]: 2025-12-02 09:58:49.715 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 2 04:58:49 localhost nova_compute[281854]: 2025-12-02 09:58:49.716 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 2 04:58:49 localhost nova_compute[281854]: 2025-12-02 09:58:49.717 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 2 04:58:49 localhost nova_compute[281854]: 2025-12-02 09:58:49.717 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 2 04:58:49 localhost systemd[1]: var-lib-containers-storage-overlay-a08429a249406af603dcdaecdd29fef51c19efe8d0cdad6822fb29e4836e89fe-merged.mount: Deactivated successfully.
Dec 2 04:58:49 localhost ceph-mon[298296]: Reconfiguring daemon osd.0 on np0005541913.localdomain Dec 2 04:58:49 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:49 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:49 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:49 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:49 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 2 04:58:50 localhost podman[304909]: Dec 2 04:58:50 localhost podman[304909]: 2025-12-02 09:58:50.481500924 +0000 UTC m=+0.063942102 container create 3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_raman, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, ceph=True, RELEASE=main, io.openshift.expose-services=) Dec 2 04:58:50 localhost systemd[1]: Started libpod-conmon-3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500.scope. Dec 2 04:58:50 localhost systemd[1]: Started libcrun container. Dec 2 04:58:50 localhost podman[304909]: 2025-12-02 09:58:50.547424319 +0000 UTC m=+0.129865457 container init 3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_raman, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:58:50 localhost podman[304909]: 2025-12-02 09:58:50.450843744 +0000 UTC m=+0.033284952 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:50 localhost podman[304909]: 2025-12-02 09:58:50.557790856 +0000 UTC m=+0.140232034 container start 
3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_raman, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True) Dec 2 04:58:50 localhost podman[304909]: 2025-12-02 09:58:50.55831496 +0000 UTC m=+0.140756198 container attach 3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_raman, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, GIT_BRANCH=main, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, version=7) Dec 2 04:58:50 localhost systemd[1]: tmp-crun.n4p3Tq.mount: Deactivated successfully. Dec 2 04:58:50 localhost objective_raman[304924]: 167 167 Dec 2 04:58:50 localhost systemd[1]: libpod-3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500.scope: Deactivated successfully. Dec 2 04:58:50 localhost podman[304909]: 2025-12-02 09:58:50.562022599 +0000 UTC m=+0.144463777 container died 3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_raman, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, 
RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4) Dec 2 04:58:50 localhost podman[304929]: 2025-12-02 09:58:50.656574379 +0000 UTC m=+0.084647916 container remove 3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_raman, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4) Dec 2 04:58:50 localhost systemd[1]: libpod-conmon-3059120fd8d1cb4a27b0e155b51ecbc33e90069a7383c8d5bf331e53dd0e4500.scope: Deactivated successfully. 
Dec 2 04:58:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:50 localhost systemd[1]: var-lib-containers-storage-overlay-ac97adbaeaa07c45666865af2f09771ba03cfb59d8c9d7b93b5393a12e595687-merged.mount: Deactivated successfully. Dec 2 04:58:50 localhost nova_compute[281854]: 2025-12-02 09:58:50.813 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:58:50 localhost nova_compute[281854]: 2025-12-02 09:58:50.985 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing 
lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:58:50 localhost nova_compute[281854]: 2025-12-02 09:58:50.987 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:58:50 localhost nova_compute[281854]: 2025-12-02 09:58:50.988 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:50 localhost nova_compute[281854]: 2025-12-02 09:58:50.988 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:50 localhost nova_compute[281854]: 2025-12-02 09:58:50.989 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:51 localhost nova_compute[281854]: 2025-12-02 09:58:51.060 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:58:51 localhost nova_compute[281854]: 2025-12-02 09:58:51.061 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - 
- -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:58:51 localhost nova_compute[281854]: 2025-12-02 09:58:51.062 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:58:51 localhost nova_compute[281854]: 2025-12-02 09:58:51.062 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:58:51 localhost nova_compute[281854]: 2025-12-02 09:58:51.063 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:58:51 localhost ceph-mon[298296]: Reconfiguring osd.3 (monmap changed)... 
Dec 2 04:58:51 localhost ceph-mon[298296]: Reconfiguring daemon osd.3 on np0005541913.localdomain Dec 2 04:58:51 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:51 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:58:51 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/481852052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:58:51 localhost nova_compute[281854]: 2025-12-02 09:58:51.510 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:58:51 localhost podman[305026]: Dec 2 04:58:51 localhost podman[305026]: 2025-12-02 09:58:51.548233039 +0000 UTC m=+0.072060220 container create 7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_golick, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 04:58:51 localhost systemd[1]: Started libpod-conmon-7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595.scope. 
Dec 2 04:58:51 localhost systemd[1]: Started libcrun container. Dec 2 04:58:51 localhost podman[305026]: 2025-12-02 09:58:51.609743025 +0000 UTC m=+0.133570226 container init 7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_golick, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph) Dec 2 04:58:51 localhost podman[305026]: 2025-12-02 09:58:51.513044328 +0000 UTC m=+0.036871609 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:51 localhost podman[305026]: 2025-12-02 09:58:51.620964075 +0000 UTC m=+0.144791286 container start 7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_golick, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container) Dec 2 04:58:51 localhost podman[305026]: 2025-12-02 09:58:51.621244072 +0000 UTC m=+0.145071273 container attach 7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_golick, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, 
url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux ) Dec 2 04:58:51 localhost charming_golick[305044]: 167 167 Dec 2 04:58:51 localhost systemd[1]: libpod-7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595.scope: Deactivated successfully. Dec 2 04:58:51 localhost podman[305026]: 2025-12-02 09:58:51.624854579 +0000 UTC m=+0.148681800 container died 7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_golick, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container) Dec 2 04:58:51 localhost podman[305049]: 2025-12-02 09:58:51.721909646 +0000 UTC m=+0.085378975 container remove 7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=charming_golick, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, ceph=True) Dec 2 04:58:51 localhost systemd[1]: libpod-conmon-7f0ab745fe4f63d6d07ecbcdbd98db4d8c61bf630adbe4b627aa3675662d3595.scope: Deactivated successfully. 
Dec 2 04:58:51 localhost nova_compute[281854]: 2025-12-02 09:58:51.773 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:58:51 localhost nova_compute[281854]: 2025-12-02 09:58:51.774 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:58:51 localhost systemd[1]: var-lib-containers-storage-overlay-8fd105a16b88c1101e745e78ce212ec91e56d84cd378b4e67ee076944015e6d0-merged.mount: Deactivated successfully. Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.024 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.028 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11645MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.028 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.029 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:58:52 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)... 
Dec 2 04:58:52 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain Dec 2 04:58:52 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:52 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:52 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.432 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.432 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.433 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:58:52 localhost podman[305119]: Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.468 281858 DEBUG 
oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:58:52 localhost podman[305119]: 2025-12-02 09:58:52.478510381 +0000 UTC m=+0.065790901 container create fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_spence, GIT_CLEAN=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=) Dec 2 04:58:52 localhost systemd[1]: Started libpod-conmon-fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce.scope. Dec 2 04:58:52 localhost systemd[1]: Started libcrun container. 
Dec 2 04:58:52 localhost podman[305119]: 2025-12-02 09:58:52.537238983 +0000 UTC m=+0.124519493 container init fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_spence, RELEASE=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, com.redhat.component=rhceph-container) Dec 2 04:58:52 localhost podman[305119]: 2025-12-02 09:58:52.444217324 +0000 UTC m=+0.031497914 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:58:52 localhost podman[305119]: 2025-12-02 09:58:52.545867754 +0000 UTC m=+0.133148264 container start fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_spence, io.buildah.version=1.41.4, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph) Dec 2 04:58:52 localhost podman[305119]: 2025-12-02 09:58:52.545972827 +0000 UTC m=+0.133253337 container attach fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_spence, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, 
io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 04:58:52 localhost crazy_spence[305134]: 167 167 Dec 2 04:58:52 localhost systemd[1]: libpod-fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce.scope: Deactivated successfully. Dec 2 04:58:52 localhost podman[305119]: 2025-12-02 09:58:52.549837501 +0000 UTC m=+0.137118061 container died fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_spence, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main) Dec 2 04:58:52 localhost podman[305139]: 2025-12-02 09:58:52.615197189 +0000 UTC m=+0.059941284 container remove fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_spence, 
version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 2 04:58:52 localhost systemd[1]: libpod-conmon-fd3e0dd49df6381d5475dccb563881dc12cf94581bfbd88e1a6ce5cea5a4b8ce.scope: Deactivated successfully. Dec 2 04:58:52 localhost systemd[1]: var-lib-containers-storage-overlay-2e9e9dea0bcca0663144b7219617296296099a694380acd88c4f301816f08291-merged.mount: Deactivated successfully. Dec 2 04:58:52 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:58:52 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1246894869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.898 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:58:52 localhost nova_compute[281854]: 2025-12-02 09:58:52.906 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:58:53 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)... Dec 2 04:58:53 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain Dec 2 04:58:53 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:53 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:53 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 2 04:58:53 localhost nova_compute[281854]: 2025-12-02 09:58:53.146 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:53 localhost nova_compute[281854]: 2025-12-02 09:58:53.307 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider 
c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:58:53 localhost nova_compute[281854]: 2025-12-02 09:58:53.309 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:58:53 localhost nova_compute[281854]: 2025-12-02 09:58:53.310 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:58:54 localhost nova_compute[281854]: 2025-12-02 09:58:54.144 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:54 localhost nova_compute[281854]: 2025-12-02 09:58:54.148 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:54 localhost nova_compute[281854]: 2025-12-02 09:58:54.149 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:54 localhost nova_compute[281854]: 2025-12-02 09:58:54.150 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:58:54 localhost ceph-mon[298296]: Reconfiguring crash.np0005541914 (monmap changed)... Dec 2 04:58:54 localhost ceph-mon[298296]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain Dec 2 04:58:54 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:54 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:54 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 2 04:58:54 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[298296]: Reconfiguring osd.1 (monmap changed)... 
Dec 2 04:58:55 localhost ceph-mon[298296]: Reconfiguring daemon osd.1 on np0005541914.localdomain Dec 2 04:58:55 localhost ceph-mon[298296]: Saving service mon spec with placement label:mon Dec 2 04:58:55 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:55 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 2 04:58:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:58:56 localhost ceph-mon[298296]: Reconfiguring osd.4 (monmap changed)... 
Dec 2 04:58:56 localhost ceph-mon[298296]: Reconfiguring daemon osd.4 on np0005541914.localdomain Dec 2 04:58:56 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:56 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:56 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:56 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:56 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 2 04:58:57 localhost ceph-mon[298296]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)... Dec 2 04:58:57 localhost ceph-mon[298296]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain Dec 2 04:58:57 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:57 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:57 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 2 04:58:58 localhost nova_compute[281854]: 2025-12-02 09:58:58.184 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:58 localhost ceph-mon[298296]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)... 
Dec 2 04:58:58 localhost ceph-mon[298296]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain Dec 2 04:58:58 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:58 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:58 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:58:59 localhost nova_compute[281854]: 2025-12-02 09:58:59.147 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:58:59 localhost ceph-mon[298296]: Reconfiguring mon.np0005541914 (monmap changed)... Dec 2 04:58:59 localhost ceph-mon[298296]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain Dec 2 04:58:59 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:59 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:59 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 04:58:59 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:59 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:58:59 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:59:00 localhost podman[305245]: Dec 2 04:59:00 localhost podman[305245]: 2025-12-02 09:59:00.217920647 +0000 UTC m=+0.072220994 container create d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_curran, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Dec 2 04:59:00 localhost systemd[1]: Started libpod-conmon-d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e.scope. Dec 2 04:59:00 localhost systemd[1]: Started libcrun container. 
Dec 2 04:59:00 localhost podman[305245]: 2025-12-02 09:59:00.185568601 +0000 UTC m=+0.039869008 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 04:59:00 localhost podman[305245]: 2025-12-02 09:59:00.285930927 +0000 UTC m=+0.140231274 container init d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_curran, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64) Dec 2 04:59:00 localhost podman[305245]: 2025-12-02 09:59:00.298145433 +0000 UTC m=+0.152445760 container start d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_curran, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, 
vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, release=1763362218, vcs-type=git, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 04:59:00 localhost podman[305245]: 2025-12-02 09:59:00.29836741 +0000 UTC m=+0.152667797 container attach d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_curran, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, RELEASE=main, vcs-type=git, io.buildah.version=1.41.4) Dec 2 04:59:00 localhost practical_curran[305261]: 167 167 Dec 2 04:59:00 localhost systemd[1]: libpod-d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e.scope: Deactivated successfully. Dec 2 04:59:00 localhost podman[305245]: 2025-12-02 09:59:00.302318365 +0000 UTC m=+0.156618742 container died d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_curran, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main) Dec 2 04:59:00 localhost podman[305266]: 2025-12-02 09:59:00.407749377 +0000 UTC m=+0.091341846 container remove d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_curran, 
io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7) Dec 2 04:59:00 localhost systemd[1]: libpod-conmon-d20cd8e7b21d40833ca3f831eaa8bdb5120a13999df4e5f1b12d7edb07be8b7e.scope: Deactivated successfully. Dec 2 04:59:00 localhost ceph-mon[298296]: Reconfiguring mon.np0005541912 (monmap changed)... Dec 2 04:59:00 localhost ceph-mon[298296]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain Dec 2 04:59:00 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:00 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:00 localhost ceph-mon[298296]: Reconfiguring mon.np0005541913 (monmap changed)... 
Dec 2 04:59:00 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 2 04:59:00 localhost ceph-mon[298296]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain Dec 2 04:59:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:01 localhost systemd[1]: var-lib-containers-storage-overlay-b9f949ad2f04a27d9de6982423a72e234a2336dade60524293046a2d4dbe23ad-merged.mount: Deactivated successfully. Dec 2 04:59:01 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:01 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:01 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 04:59:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:59:02 localhost systemd[1]: tmp-crun.zpADST.mount: Deactivated successfully. 
Dec 2 04:59:02 localhost podman[305282]: 2025-12-02 09:59:02.461602424 +0000 UTC m=+0.099393571 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Dec 2 04:59:02 localhost podman[305282]: 2025-12-02 09:59:02.478037904 +0000 UTC m=+0.115829041 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 04:59:02 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:59:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:59:03.043 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:59:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:59:03.044 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:59:03 localhost ovn_metadata_agent[160216]: 2025-12-02 09:59:03.045 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:59:03 localhost nova_compute[281854]: 2025-12-02 09:59:03.189 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:04 localhost openstack_network_exporter[242845]: ERROR 09:59:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:59:04 localhost openstack_network_exporter[242845]: ERROR 09:59:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:59:04 localhost openstack_network_exporter[242845]: ERROR 09:59:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:59:04 localhost openstack_network_exporter[242845]: ERROR 09:59:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:59:04 localhost openstack_network_exporter[242845]: Dec 2 04:59:04 localhost 
openstack_network_exporter[242845]: ERROR 09:59:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:59:04 localhost openstack_network_exporter[242845]: Dec 2 04:59:04 localhost nova_compute[281854]: 2025-12-02 09:59:04.150 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:06 localhost podman[240799]: time="2025-12-02T09:59:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:59:06 localhost podman[240799]: @ - - [02/Dec/2025:09:59:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 04:59:06 localhost podman[240799]: @ - - [02/Dec/2025:09:59:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18728 "" "Go-http-client/1.1" Dec 2 04:59:08 localhost nova_compute[281854]: 2025-12-02 09:59:08.243 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 04:59:08 localhost podman[305301]: 2025-12-02 09:59:08.457923068 +0000 UTC m=+0.089187978 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 2 04:59:08 localhost podman[305301]: 2025-12-02 09:59:08.492217716 +0000 UTC 
m=+0.123482596 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:59:08 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 04:59:09 localhost nova_compute[281854]: 2025-12-02 09:59:09.153 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 04:59:10 localhost systemd[1]: tmp-crun.tcT7zL.mount: Deactivated successfully. Dec 2 04:59:10 localhost podman[305319]: 2025-12-02 09:59:10.439489433 +0000 UTC m=+0.082136010 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public) Dec 2 04:59:10 localhost podman[305319]: 2025-12-02 09:59:10.477333405 +0000 UTC m=+0.119979972 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 04:59:10 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:59:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:59:11 localhost systemd[1]: tmp-crun.olU9q9.mount: Deactivated successfully. 
Dec 2 04:59:11 localhost podman[305338]: 2025-12-02 09:59:11.450762802 +0000 UTC m=+0.090186555 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 04:59:11 localhost podman[305338]: 2025-12-02 09:59:11.458866318 +0000 UTC m=+0.098290141 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 04:59:11 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 04:59:13 localhost nova_compute[281854]: 2025-12-02 09:59:13.246 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:14 localhost nova_compute[281854]: 2025-12-02 09:59:14.173 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:16 localhost systemd[299560]: Starting Mark boot as successful... Dec 2 04:59:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:59:16 localhost systemd[299560]: Finished Mark boot as successful. Dec 2 04:59:16 localhost systemd[1]: tmp-crun.rTwnU7.mount: Deactivated successfully. Dec 2 04:59:16 localhost podman[305361]: 2025-12-02 09:59:16.430797658 +0000 UTC m=+0.069779998 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, 
io.buildah.version=1.41.3) Dec 2 04:59:16 localhost podman[305361]: 2025-12-02 09:59:16.445807061 +0000 UTC m=+0.084789411 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 04:59:16 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:59:18 localhost nova_compute[281854]: 2025-12-02 09:59:18.286 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:19 localhost nova_compute[281854]: 2025-12-02 09:59:19.176 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:59:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 04:59:19 localhost podman[305382]: 2025-12-02 09:59:19.501478666 +0000 UTC m=+0.137149931 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 2 04:59:19 localhost podman[305381]: 2025-12-02 09:59:19.477283489 +0000 UTC m=+0.116260692 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:59:19 localhost podman[305382]: 2025-12-02 09:59:19.543089149 +0000 UTC m=+0.178760404 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 04:59:19 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 04:59:19 localhost podman[305381]: 2025-12-02 09:59:19.561163414 +0000 UTC m=+0.200140597 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:59:19 localhost systemd[1]: 
53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 04:59:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:23 localhost nova_compute[281854]: 2025-12-02 09:59:23.288 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:24 localhost nova_compute[281854]: 2025-12-02 09:59:24.179 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:28 localhost nova_compute[281854]: 2025-12-02 09:59:28.354 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:29 localhost nova_compute[281854]: 2025-12-02 09:59:29.182 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:30 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 04:59:33 localhost nova_compute[281854]: 2025-12-02 09:59:33.358 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:33 localhost systemd[1]: tmp-crun.XO446X.mount: Deactivated successfully. 
Dec 2 04:59:33 localhost podman[305431]: 2025-12-02 09:59:33.52266967 +0000 UTC m=+0.155823283 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 04:59:33 localhost podman[305431]: 2025-12-02 09:59:33.566757362 +0000 UTC m=+0.199910995 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 04:59:33 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 04:59:34 localhost openstack_network_exporter[242845]: ERROR 09:59:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:59:34 localhost openstack_network_exporter[242845]: ERROR 09:59:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 04:59:34 localhost openstack_network_exporter[242845]: ERROR 09:59:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 04:59:34 localhost openstack_network_exporter[242845]: ERROR 09:59:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 04:59:34 localhost openstack_network_exporter[242845]: Dec 2 04:59:34 localhost openstack_network_exporter[242845]: ERROR 09:59:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 04:59:34 localhost openstack_network_exporter[242845]: Dec 2 04:59:34 localhost nova_compute[281854]: 2025-12-02 09:59:34.185 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:36 localhost podman[240799]: time="2025-12-02T09:59:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 04:59:36 localhost podman[240799]: @ - - [02/Dec/2025:09:59:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 04:59:36 localhost podman[240799]: @ - - [02/Dec/2025:09:59:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18723 "" "Go-http-client/1.1" Dec 2 04:59:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 
handle_command mon_command({"prefix": "status", "format": "json"} v 0) Dec 2 04:59:36 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2480513664' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch Dec 2 04:59:38 localhost nova_compute[281854]: 2025-12-02 09:59:38.429 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 04:59:39 localhost podman[305450]: 2025-12-02 09:59:39.1713339 +0000 UTC m=+0.087933180 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 04:59:39 localhost podman[305450]: 2025-12-02 09:59:39.181161653 +0000 UTC m=+0.097760923 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125) Dec 2 04:59:39 localhost nova_compute[281854]: 2025-12-02 09:59:39.191 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:39 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 04:59:40 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. 
Dec 2 04:59:41 localhost podman[305467]: 2025-12-02 09:59:41.435773855 +0000 UTC m=+0.079846893 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, release=1755695350, io.openshift.expose-services=, 
url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 2 04:59:41 localhost podman[305467]: 2025-12-02 09:59:41.476094857 +0000 UTC m=+0.120167845 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, 
com.redhat.component=ubi9-minimal-container, name=ubi9-minimal) Dec 2 04:59:41 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 04:59:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 04:59:41 localhost systemd[1]: tmp-crun.87RCAL.mount: Deactivated successfully. Dec 2 04:59:41 localhost podman[305488]: 2025-12-02 09:59:41.623816731 +0000 UTC m=+0.102082690 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:59:41 localhost podman[305488]: 
2025-12-02 09:59:41.637267392 +0000 UTC m=+0.115533331 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 04:59:41 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 04:59:43 localhost nova_compute[281854]: 2025-12-02 09:59:43.430 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:43 localhost nova_compute[281854]: 2025-12-02 09:59:43.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:44 localhost nova_compute[281854]: 2025-12-02 09:59:44.191 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 04:59:47 localhost podman[305512]: 2025-12-02 09:59:47.447263591 +0000 UTC m=+0.087052957 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 04:59:47 localhost podman[305512]: 2025-12-02 09:59:47.482873737 +0000 UTC m=+0.122663123 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 04:59:47 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 04:59:48 localhost nova_compute[281854]: 2025-12-02 09:59:48.466 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:49 localhost nova_compute[281854]: 2025-12-02 09:59:49.193 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:49 localhost nova_compute[281854]: 2025-12-02 09:59:49.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:49 localhost nova_compute[281854]: 2025-12-02 09:59:49.917 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:49 localhost nova_compute[281854]: 2025-12-02 09:59:49.917 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 04:59:49 localhost nova_compute[281854]: 2025-12-02 09:59:49.918 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 04:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 04:59:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 04:59:50 localhost nova_compute[281854]: 2025-12-02 09:59:50.414 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 04:59:50 localhost nova_compute[281854]: 2025-12-02 09:59:50.414 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 04:59:50 localhost nova_compute[281854]: 2025-12-02 09:59:50.415 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 04:59:50 localhost nova_compute[281854]: 2025-12-02 09:59:50.415 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 04:59:50 localhost podman[305532]: 2025-12-02 09:59:50.443443723 +0000 UTC m=+0.084150829 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:59:50 localhost podman[305532]: 2025-12-02 09:59:50.454976662 +0000 UTC m=+0.095683818 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 04:59:50 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 04:59:50 localhost podman[305533]: 2025-12-02 09:59:50.542712096 +0000 UTC m=+0.180567186 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 04:59:50 localhost podman[305533]: 2025-12-02 09:59:50.605037538 +0000 UTC m=+0.242892578 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 04:59:50 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 04:59:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:50 localhost nova_compute[281854]: 2025-12-02 09:59:50.948 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.059 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 04:59:51 localhost 
nova_compute[281854]: 2025-12-02 09:59:51.059 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.060 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.060 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.901 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.902 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.903 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.903 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 04:59:51 localhost nova_compute[281854]: 2025-12-02 09:59:51.904 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:59:52 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:59:52 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3862452935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.372 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.436 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.437 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.652 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.653 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11690MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.654 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.654 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.807 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.808 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.809 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 04:59:52 localhost nova_compute[281854]: 2025-12-02 09:59:52.842 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 04:59:53 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 04:59:53 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3342764562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 04:59:53 localhost nova_compute[281854]: 2025-12-02 09:59:53.293 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 04:59:53 localhost nova_compute[281854]: 2025-12-02 09:59:53.300 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 04:59:53 localhost nova_compute[281854]: 2025-12-02 09:59:53.406 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 04:59:53 localhost nova_compute[281854]: 2025-12-02 09:59:53.408 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 04:59:53 localhost nova_compute[281854]: 2025-12-02 09:59:53.408 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 04:59:53 localhost nova_compute[281854]: 2025-12-02 09:59:53.468 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:54 localhost nova_compute[281854]: 2025-12-02 09:59:54.196 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:54 localhost nova_compute[281854]: 2025-12-02 09:59:54.409 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:54 localhost nova_compute[281854]: 2025-12-02 09:59:54.409 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:54 localhost nova_compute[281854]: 2025-12-02 09:59:54.410 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:54 localhost nova_compute[281854]: 2025-12-02 09:59:54.410 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:54 localhost nova_compute[281854]: 2025-12-02 09:59:54.824 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 04:59:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 04:59:58 localhost nova_compute[281854]: 2025-12-02 09:59:58.506 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 04:59:59 localhost nova_compute[281854]: 2025-12-02 09:59:59.198 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:00 localhost ceph-mon[298296]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 2 05:00:00 localhost ceph-mon[298296]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Dec 2 05:00:00 localhost ceph-mon[298296]: stray daemon mgr.np0005541911.adcgiw on host np0005541911.localdomain not managed by cephadm Dec 2 05:00:00 localhost ceph-mon[298296]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 2 05:00:00 localhost ceph-mon[298296]: stray host np0005541911.localdomain has 1 stray daemons: ['mgr.np0005541911.adcgiw'] Dec 2 05:00:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:01 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth 
get", "entity": "client.admin"} : dispatch Dec 2 05:00:01 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 05:00:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:00:03.044 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:00:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:00:03.045 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:00:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:00:03.047 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:00:03 localhost nova_compute[281854]: 2025-12-02 10:00:03.544 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:04 localhost openstack_network_exporter[242845]: ERROR 10:00:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:00:04 localhost openstack_network_exporter[242845]: ERROR 10:00:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:00:04 localhost openstack_network_exporter[242845]: ERROR 10:00:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:00:04 localhost openstack_network_exporter[242845]: ERROR 10:00:04 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:00:04 localhost openstack_network_exporter[242845]: Dec 2 05:00:04 localhost openstack_network_exporter[242845]: ERROR 10:00:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:00:04 localhost openstack_network_exporter[242845]: Dec 2 05:00:04 localhost nova_compute[281854]: 2025-12-02 10:00:04.200 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:00:04 localhost podman[305713]: 2025-12-02 10:00:04.449126209 +0000 UTC m=+0.086420031 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Dec 2 05:00:04 localhost podman[305713]: 2025-12-02 10:00:04.458361108 +0000 UTC m=+0.095654899 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:00:04 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.491984) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605492034, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2689, "num_deletes": 255, "total_data_size": 7858319, "memory_usage": 8109440, "flush_reason": "Manual Compaction"} Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605525162, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4761463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14440, "largest_seqno": 17124, "table_properties": {"data_size": 4750663, "index_size": 6729, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26320, "raw_average_key_size": 22, "raw_value_size": 4727507, "raw_average_value_size": 3976, "num_data_blocks": 293, "num_entries": 1189, "num_filter_entries": 1189, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 1764669502, "file_creation_time": 1764669605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 33246 microseconds, and 11601 cpu microseconds. Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.525227) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4761463 bytes OK Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.525258) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.527263) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.527286) EVENT_LOG_v1 {"time_micros": 1764669605527280, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.527313) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 7845517, prev total WAL file size 7845517, number of live WAL files 2. Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.529661) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. 
'7061786F73003131353437' seq:0, type:0; will stop at (end) Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(4649KB)], [21(15MB)] Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605529714, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 21088372, "oldest_snapshot_seqno": -1} Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 12081 keys, 18532366 bytes, temperature: kUnknown Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605619148, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 18532366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18460848, "index_size": 40249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30213, "raw_key_size": 322973, "raw_average_key_size": 26, "raw_value_size": 18252231, "raw_average_value_size": 1510, "num_data_blocks": 1542, "num_entries": 12081, "num_filter_entries": 12081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764669605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.619542) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 18532366 bytes Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.621503) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.4 rd, 206.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.5, 15.6 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 12628, records dropped: 547 output_compression: NoCompression Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.621537) EVENT_LOG_v1 {"time_micros": 1764669605621522, "job": 10, "event": "compaction_finished", "compaction_time_micros": 89574, "compaction_time_cpu_micros": 34056, "output_level": 6, "num_output_files": 1, "total_output_size": 18532366, "num_input_records": 12628, "num_output_records": 12081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.529544) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.621790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.621797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.621801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.621806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:05 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:05.621809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:05 localhost ceph-mon[298296]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' Dec 2 05:00:06 localhost podman[240799]: time="2025-12-02T10:00:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:00:06 localhost podman[240799]: @ - - [02/Dec/2025:10:00:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:00:06 localhost podman[240799]: @ - - [02/Dec/2025:10:00:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18723 "" "Go-http-client/1.1" Dec 2 05:00:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mgr fail"} v 0) Dec 2 
05:00:06 localhost ceph-mon[298296]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/1313402171' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 05:00:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 e91: 6 total, 6 up, 6 in Dec 2 05:00:06 localhost systemd[1]: session-71.scope: Deactivated successfully. Dec 2 05:00:06 localhost systemd[1]: session-71.scope: Consumed 10.596s CPU time. Dec 2 05:00:06 localhost systemd-logind[757]: Session 71 logged out. Waiting for processes to exit. Dec 2 05:00:06 localhost systemd-logind[757]: Removed session 71. Dec 2 05:00:06 localhost ceph-mon[298296]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 05:00:06 localhost ceph-mon[298296]: Activating manager daemon np0005541914.lljzmk Dec 2 05:00:06 localhost ceph-mon[298296]: from='client.? 172.18.0.200:0/1313402171' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 2 05:00:06 localhost ceph-mon[298296]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 2 05:00:06 localhost ceph-mon[298296]: Manager daemon np0005541914.lljzmk is now available Dec 2 05:00:07 localhost sshd[305730]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:00:07 localhost systemd-logind[757]: New session 72 of user ceph-admin. Dec 2 05:00:07 localhost systemd[1]: Started Session 72 of User ceph-admin. 
Dec 2 05:00:07 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 05:00:07 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch Dec 2 05:00:07 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 05:00:07 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch Dec 2 05:00:08 localhost systemd[1]: tmp-crun.hfbLnO.mount: Deactivated successfully. Dec 2 05:00:08 localhost podman[305841]: 2025-12-02 10:00:08.198569905 +0000 UTC m=+0.101629849 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, vcs-type=git, 
maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 2 05:00:08 localhost podman[305841]: 2025-12-02 10:00:08.291334464 +0000 UTC m=+0.194394448 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7) Dec 2 05:00:08 localhost nova_compute[281854]: 2025-12-02 10:00:08.593 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:08 localhost 
ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:08 localhost ceph-mon[298296]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Dec 2 05:00:08 localhost ceph-mon[298296]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Dec 2 05:00:08 localhost ceph-mon[298296]: Cluster is now healthy Dec 2 05:00:08 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:08 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:08 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:09 localhost nova_compute[281854]: 2025-12-02 10:00:09.202 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:00:09 localhost podman[306011]: 2025-12-02 10:00:09.455739649 +0000 UTC m=+0.083372367 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible) Dec 2 05:00:09 localhost podman[306011]: 2025-12-02 10:00:09.465221745 +0000 UTC 
m=+0.092854523 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:00:09 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:00:09 localhost ceph-mon[298296]: [02/Dec/2025:10:00:08] ENGINE Bus STARTING Dec 2 05:00:09 localhost ceph-mon[298296]: [02/Dec/2025:10:00:09] ENGINE Serving on http://172.18.0.108:8765 Dec 2 05:00:09 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:09 localhost ceph-mon[298296]: [02/Dec/2025:10:00:09] ENGINE Serving on https://172.18.0.108:7150 Dec 2 05:00:09 localhost ceph-mon[298296]: [02/Dec/2025:10:00:09] ENGINE Bus STARTED Dec 2 05:00:09 localhost ceph-mon[298296]: [02/Dec/2025:10:00:09] ENGINE Client ('172.18.0.108', 52382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 2 05:00:09 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.558204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610558273, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 490, "num_deletes": 252, "total_data_size": 1628548, "memory_usage": 1641288, "flush_reason": "Manual Compaction"} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610569275, "cf_name": "default", "job": 11, "event": 
"table_file_creation", "file_number": 26, "file_size": 1063710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17129, "largest_seqno": 17614, "table_properties": {"data_size": 1060733, "index_size": 960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 8165, "raw_average_key_size": 21, "raw_value_size": 1054438, "raw_average_value_size": 2819, "num_data_blocks": 38, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669605, "oldest_key_time": 1764669605, "file_creation_time": 1764669610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11234 microseconds, and 3800 cpu microseconds. Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.569333) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 1063710 bytes OK Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.569468) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.571391) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.571425) EVENT_LOG_v1 {"time_micros": 1764669610571413, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.571454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1625430, prev total WAL file size 1625754, number of live WAL files 2. 
Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610572358, "job": 11, "event": "table_file_deletion", "file_number": 23} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610574985, "job": 11, "event": "table_file_deletion", "file_number": 21} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.575307) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373536' seq:72057594037927935, type:22 .. 
'6D6772737461740034303038' seq:0, type:0; will stop at (end) Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(1038KB)], [24(17MB)] Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610575414, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19596076, "oldest_snapshot_seqno": -1} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11923 keys, 17247392 bytes, temperature: kUnknown Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610672180, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 17247392, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17181470, "index_size": 35037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29829, "raw_key_size": 320049, "raw_average_key_size": 26, "raw_value_size": 16980125, "raw_average_value_size": 1424, "num_data_blocks": 1326, "num_entries": 11923, "num_filter_entries": 11923, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764669610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.672560) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 17247392 bytes Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.674631) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.3 rd, 178.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 17.7 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(34.6) write-amplify(16.2) OK, records in: 12455, records dropped: 532 output_compression: NoCompression Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.674663) EVENT_LOG_v1 {"time_micros": 1764669610674649, "job": 12, "event": "compaction_finished", "compaction_time_micros": 96873, "compaction_time_cpu_micros": 50530, "output_level": 6, "num_output_files": 1, "total_output_size": 17247392, "num_input_records": 12455, "num_output_records": 11923, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541913/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610674955, "job": 12, "event": "table_file_deletion", "file_number": 26} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610677361, "job": 12, "event": "table_file_deletion", "file_number": 24} Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.575179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.677403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.677410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.677413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.677416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:00:10.677419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:00:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:11 localhost 
ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 05:00:11 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost 
ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 05:00:11 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:00:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:00:11 localhost systemd[1]: tmp-crun.Jz9151.mount: Deactivated successfully. 
Dec 2 05:00:11 localhost podman[306279]: 2025-12-02 10:00:11.624194349 +0000 UTC m=+0.095884614 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal) Dec 2 05:00:11 localhost podman[306279]: 2025-12-02 10:00:11.641898335 +0000 UTC m=+0.113588600 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., 
com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:00:11 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:00:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:00:11 localhost podman[306334]: 2025-12-02 10:00:11.771669247 +0000 UTC m=+0.092030001 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The 
Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:00:11 localhost podman[306334]: 2025-12-02 10:00:11.783945477 +0000 UTC m=+0.104306241 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:00:11 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:00:12 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 05:00:12 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:00:12 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:12 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:12 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf Dec 2 05:00:12 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:12 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:12 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf Dec 2 05:00:13 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:13 localhost ceph-mon[298296]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:13 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 2 05:00:13 localhost nova_compute[281854]: 2025-12-02 10:00:13.627 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:14 localhost nova_compute[281854]: 2025-12-02 10:00:14.205 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:14 localhost ceph-mon[298296]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:14 localhost ceph-mon[298296]: Updating 
np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:14 localhost ceph-mon[298296]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:00:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.104 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 
'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.106 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.119 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.120 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8b6b5a37-3a03-4761-8c60-9d7d8bd8f95b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.106208', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3a70d48-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.325281813, 'message_signature': '75a8f8e0ab12297ccd32ed35f456743ff8e540c0dde77c7d5b7a26bb266ede8a'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.106208', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3a72120-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.325281813, 'message_signature': 'd6c2d03db7755622c3c6f2565f6f8f66be29ce5caf135cd2a1aa8f2a82c7a42d'}]}, 'timestamp': '2025-12-02 10:00:16.120880', '_unique_id': '667d3e2fdf5c4208afd7803c5b2aa8c5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.122 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.123 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.127 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fdfb64d9-9dff-4535-b673-f6bc1dd7acad', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.123898', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3a84726-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': 'd7baa162f90ca88b0b1a0aec7466953aaee669de0293ef2807e5d1a8a183b4c8'}]}, 'timestamp': '2025-12-02 10:00:16.128419', '_unique_id': '89df3b085c0d4835b3d19ff089355d26'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.129 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.130 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.130 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.130 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.159 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.159 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '95c65201-1a38-4bf4-bef3-50aff3f97b52', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.130898', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3ad160c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': 'dfb7526caa75aa8ab5096d5deb718bab05775c8f68a76c450272efdf05afc95e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.130898', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3ad27a0-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': 'd22c6ba064e1fc200f367de71c1f7a034fae18c3f1f528374ab1d970e0d0e22c'}]}, 'timestamp': '2025-12-02 10:00:16.160356', '_unique_id': 'e08b5d254bf148769640a6cf25bbe62d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.162 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.162 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8849388f-1751-4059-8f39-dce0ecd26c2d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.162706', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3ad97b2-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': 'ff94a62108827785206381686b7e5d4d83552bce5cf4c8cf94c059f0c42437c1'}]}, 'timestamp': '2025-12-02 10:00:16.163257', '_unique_id': '692e36a6474c4c2eb35bc2b17f92e200'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.165 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.165 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '77bf126b-caa6-422f-839c-cb5d2eeb6f72', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.165470', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3ae021a-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': '35b945549208b309173ac431beb469180e005d3ecd60f370b67a6876e49f5932'}]}, 'timestamp': '2025-12-02 10:00:16.165994', '_unique_id': '68948cce943c412e858c5c17c6e6a2f2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:00:16.166 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.166 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.168 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.168 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.168 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.169 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6c783bc7-f075-40a7-8a8a-1594028124e8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.168869', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3ae892e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': 'd03761be20919156bb8e4180b0beb24446c312d7c720d726c7e505cc5ae19683'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.168869', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3aea256-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': 'a3b26ffddff9c337dba32e6a4facf2935c8a18d1547ea3226dbaffb66b7e4915'}]}, 'timestamp': '2025-12-02 10:00:16.170129', '_unique_id': 'fb23a52902714366a1428828ea9346d1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.171 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.172 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.172 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.173 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '77f45bfa-190a-426f-847d-06170e5e58ff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.172676', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3af1a92-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.325281813, 'message_signature': '8f16ec67e976340b3dee0404910398dbfe074fd38c56a71a0ef01050ef092ab7'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.172676', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3af2cee-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.325281813, 'message_signature': '4265b989034e1e66b1dcb5bc0007985b94a93dd96ca77017d996b5e5b92b2b32'}]}, 'timestamp': '2025-12-02 10:00:16.173601', '_unique_id': '7941c4a1e7334b3b982a32a5aba386a3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.175 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.175 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e619f95d-da3f-4763-8b83-d82aefd0f76a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.175905', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3af98aa-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': 'f8d2ccac784a6fd6902c17ea58eded5ae8c29e3befce4d12842d8ae7e3232b89'}]}, 'timestamp': '2025-12-02 10:00:16.176503', '_unique_id': 'e9f91d927a7f49b48a36e5f72656cfc4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.178 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.178 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '03f79bb5-011a-439c-bbdc-b0c4823e39a4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.178759', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3b00952-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': '87deb06bdeb2cfc20e36168b83aa5a7c65d103cc09823df51455698515c980e0'}]}, 'timestamp': '2025-12-02 10:00:16.179270', '_unique_id': 'a441fc86ef97402ea97610159659a8d7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.181 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.181 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.182 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4c20efe2-f163-4f9b-8dcb-63245ceeb833', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.181515', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3b075fe-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.325281813, 'message_signature': 'a13a9ef0edb2546c2fd1bc4181093794cf23fb953261aab3a1cfc9023b48521c'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.181515', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3b087f6-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.325281813, 'message_signature': 'd211d61dcd3637f4aa44540c0c94827b1636f11e2b8d7b4499dec71585ca7bd8'}]}, 'timestamp': '2025-12-02 10:00:16.182482', '_unique_id': '99fb58283e144af29cc9918818262c63'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.184 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.184 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.185 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging 
[-] Could not send notification to notifications. Payload={'message_id': 'fe47d1e6-f2ff-4f87-a919-945fe4823cd6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.184872', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3b0f68c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': '4cf597eefd6ea53755eda141f2a2aa8510accd2435c751468925e10b73766139'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.184872', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3b106e0-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': '9ae455225eaa285467cfc858b19e904dfd32ae37df2a50ca4e7fdf6ba685e250'}]}, 'timestamp': '2025-12-02 10:00:16.185762', '_unique_id': '56bfb80bfa054187ad1b4fa49c8a5e06'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 
05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.187 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 
ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '453991f9-19ff-43f7-81c0-33c01546933c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.188029', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3b171de-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': 'ffd4858640832c3c7ce1adcfe4c1e84b75db2726d9366b225e7451e176de779f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.188029', 'resource_metadata': 
{'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3b183a4-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': '6f66a757264df726c4a95e565492b00e0edee7673a5ff7d96fc7e9f8e4c44499'}]}, 'timestamp': '2025-12-02 10:00:16.188920', '_unique_id': 'bbfa3f188214429daccc22ac979e2672'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in 
_send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.189 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.191 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.191 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6d4f3f71-0ced-4bdb-9750-3d6b451a59af', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.191146', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3b1ebe6-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': 'a1ccc66f86a60e8ea356e7bfc2a17baf2ab855f87b0ff949b70921bd09d0d244'}]}, 'timestamp': '2025-12-02 10:00:16.191651', '_unique_id': '5940a4e0be764d1b9c761ac75a3b585b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.193 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.193 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '05bf63f6-fd8a-4fe3-bbad-f6e43f0d3dde', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.193823', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3b253f6-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': 'ebf836088d1638774bc18741a8c325e50857ed6b8833d457d917d1a3b941988d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.193823', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3b265f8-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': 'd6204ebd362632ec993073058af6f553dbfca0866e582146115f392fe00470c2'}]}, 'timestamp': '2025-12-02 10:00:16.194752', '_unique_id': '0b300352f33c4b5ea4fcb173b1f78d0c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.195 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.196 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.197 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6937baeb-c719-411a-88b7-6fe3c35ffdef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.196988', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3b2d010-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': 'a508d2140006d9dd1d5468b635085dd3066d42635a739140cea7f986e6929b7c'}]}, 'timestamp': '2025-12-02 10:00:16.197456', '_unique_id': '63e9d289d8fd4a75aa1b79eb8b80eeb8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:00:16.198 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.199 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.199 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7d4f811d-3523-40f6-9a2b-e30899bd3cd0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:00:16.199659', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b3b3385c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': 'daebb99083c8627ccb13bdc0727f3b686afb6502eb1b8bc6e1d857001d5de3a4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:00:16.199659', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b3b348b0-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.350007386, 'message_signature': '39abba282f25c19c9afbb23bc1b20da398d91e41674a9db0ccafbeee2ad9f4e2'}]}, 'timestamp': '2025-12-02 10:00:16.200515', '_unique_id': '77a463a484e946a2b6d902754663f762'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.202 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.219 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ae22b03e-30c7-468d-94bd-dbb47f1b6d96', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:00:16.202882', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'b3b62bde-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.438024119, 'message_signature': '24dbb8c3dd54b76c391c85badd8c7616e21bc21dd5dab676710674515318fd74'}]}, 'timestamp': '2025-12-02 10:00:16.219388', '_unique_id': 'd2e3643955ee46a88ea6c6eb61da5cd4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:00:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bfe1301b-d0a2-4ba3-88da-18eea969306e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.220944', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3b67468-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': 
'18cf9fa6e90893d3486488071f7cd27c75417efa36faa21b1eb7b3f4451bcd3d'}]}, 'timestamp': '2025-12-02 10:00:16.221238', '_unique_id': '509fe3f269744d26b2ce23fecc6d5e9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR 
oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.221 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.222 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dd06682d-f47c-42ff-aa57-442d47283ceb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.222979', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3b6c3f0-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': '59a838a37dc1038fcd385e1d10cbfe01abf05ebf753e94fa2ad086698045a9af'}]}, 'timestamp': '2025-12-02 10:00:16.223275', '_unique_id': '6522b5a688484d04b34b4d74878d3f99'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.223 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.224 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.224 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 15220000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd8b70a51-8a82-4b0d-a366-15075f5a2be2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15220000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:00:16.224595', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'b3b703c4-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.438024119, 'message_signature': '4e974438ac71afc62b2e63b66d14f8fdc43081a47beb00f6687efca9cd04da2b'}]}, 'timestamp': '2025-12-02 10:00:16.224898', '_unique_id': '4766a431b1514d84a8f28c0d03332b33'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.225 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.226 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.226 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '14afbf92-0fde-41c8-b50f-bc19de7573dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:00:16.226206', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'b3b741e0-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11778.342974778, 'message_signature': '932aa14a4d105dc5d991ca1831f7618690575360ea8c2d2294dd78b8fa88d450'}]}, 'timestamp': '2025-12-02 10:00:16.226497', '_unique_id': 'b1c8b157d67a4010a3e22055d2e9e9f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     yield
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:00:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 ERROR oslo_messaging.notify.messaging Dec 2 05:00:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:00:16.227 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:00:17 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:00:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:00:18 localhost podman[306818]: 2025-12-02 10:00:18.45385446 +0000 UTC m=+0.084394516 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:00:18 localhost podman[306818]: 2025-12-02 10:00:18.46650764 +0000 UTC m=+0.097047686 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:00:18 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:00:18 localhost nova_compute[281854]: 2025-12-02 10:00:18.665 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:19 localhost nova_compute[281854]: 2025-12-02 10:00:19.209 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:00:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:00:21 localhost podman[306838]: 2025-12-02 10:00:21.448377078 +0000 UTC m=+0.086509712 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:00:21 localhost podman[306838]: 2025-12-02 10:00:21.460929575 +0000 UTC m=+0.099062199 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:00:21 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:00:21 localhost podman[306839]: 2025-12-02 10:00:21.547816647 +0000 UTC m=+0.183320281 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller) Dec 2 05:00:21 localhost podman[306839]: 2025-12-02 10:00:21.610148169 +0000 UTC m=+0.245651773 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:00:21 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:00:23 localhost nova_compute[281854]: 2025-12-02 10:00:23.718 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:24 localhost nova_compute[281854]: 2025-12-02 10:00:24.214 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:28 localhost nova_compute[281854]: 2025-12-02 10:00:28.758 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:29 localhost nova_compute[281854]: 2025-12-02 10:00:29.215 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:30 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:33 localhost nova_compute[281854]: 2025-12-02 10:00:33.789 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:34 localhost openstack_network_exporter[242845]: ERROR 10:00:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:00:34 localhost openstack_network_exporter[242845]: ERROR 10:00:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:00:34 localhost openstack_network_exporter[242845]: ERROR 10:00:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:00:34 localhost openstack_network_exporter[242845]: 
ERROR 10:00:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:00:34 localhost openstack_network_exporter[242845]: Dec 2 05:00:34 localhost openstack_network_exporter[242845]: ERROR 10:00:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:00:34 localhost openstack_network_exporter[242845]: Dec 2 05:00:34 localhost nova_compute[281854]: 2025-12-02 10:00:34.218 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:00:35 localhost podman[306886]: 2025-12-02 10:00:35.440556692 +0000 UTC m=+0.081833377 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:00:35 localhost podman[306886]: 2025-12-02 10:00:35.451975238 +0000 UTC m=+0.093251913 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_compute) Dec 2 05:00:35 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:00:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:36 localhost podman[240799]: time="2025-12-02T10:00:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:00:36 localhost podman[240799]: @ - - [02/Dec/2025:10:00:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:00:36 localhost podman[240799]: @ - - [02/Dec/2025:10:00:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18721 "" "Go-http-client/1.1" Dec 2 05:00:38 localhost nova_compute[281854]: 2025-12-02 10:00:38.839 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:39 localhost nova_compute[281854]: 2025-12-02 10:00:39.221 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:00:40 localhost podman[306904]: 2025-12-02 10:00:40.440434431 +0000 UTC m=+0.080325787 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:00:40 localhost podman[306904]: 2025-12-02 10:00:40.474975347 +0000 UTC 
m=+0.114866663 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:00:40 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:00:40 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:00:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:00:42 localhost podman[306924]: 2025-12-02 10:00:42.443748339 +0000 UTC m=+0.086510353 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 
2 05:00:42 localhost podman[306924]: 2025-12-02 10:00:42.457055486 +0000 UTC m=+0.099817490 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:00:42 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:00:42 localhost podman[306923]: 2025-12-02 10:00:42.550139574 +0000 UTC m=+0.195444516 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 05:00:42 localhost podman[306923]: 2025-12-02 10:00:42.561853968 +0000 UTC m=+0.207158910 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal) Dec 2 05:00:42 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:00:43 localhost nova_compute[281854]: 2025-12-02 10:00:43.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:43 localhost nova_compute[281854]: 2025-12-02 10:00:43.867 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:44 localhost nova_compute[281854]: 2025-12-02 10:00:44.227 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:48 localhost nova_compute[281854]: 2025-12-02 10:00:48.877 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:49 localhost nova_compute[281854]: 2025-12-02 10:00:49.228 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:00:49 localhost podman[306967]: 2025-12-02 10:00:49.435368627 +0000 UTC m=+0.078838577 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_id=multipathd) Dec 2 05:00:49 localhost podman[306967]: 2025-12-02 10:00:49.446323081 +0000 UTC m=+0.089792991 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:00:49 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:00:49 localhost nova_compute[281854]: 2025-12-02 10:00:49.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:49 localhost nova_compute[281854]: 2025-12-02 10:00:49.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:00:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:50 localhost nova_compute[281854]: 2025-12-02 10:00:50.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:50 localhost nova_compute[281854]: 2025-12-02 10:00:50.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:00:50 localhost nova_compute[281854]: 2025-12-02 10:00:50.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:00:51 localhost nova_compute[281854]: 2025-12-02 10:00:51.747 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:00:51 localhost nova_compute[281854]: 2025-12-02 10:00:51.748 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:00:51 localhost nova_compute[281854]: 2025-12-02 10:00:51.748 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:00:51 localhost nova_compute[281854]: 2025-12-02 10:00:51.748 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.108 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": 
"e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.222 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.222 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.224 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:00:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:00:52 localhost podman[306987]: 2025-12-02 10:00:52.423549144 +0000 UTC m=+0.061704087 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 2 05:00:52 localhost systemd[1]: tmp-crun.9ecj9X.mount: Deactivated successfully. 
Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.494 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:00:52 localhost podman[306986]: 2025-12-02 10:00:52.494844358 +0000 UTC m=+0.134100840 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.494 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.495 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.495 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.495 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:00:52 localhost podman[306986]: 2025-12-02 10:00:52.502551495 +0000 UTC m=+0.141807997 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:00:52 
localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:00:52 localhost podman[306987]: 2025-12-02 10:00:52.528574743 +0000 UTC m=+0.166729686 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:00:52 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:00:52 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:00:52 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2791761851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.907 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.968 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:00:52 localhost nova_compute[281854]: 2025-12-02 10:00:52.969 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.188 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.190 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11689MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.190 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.191 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.246 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.247 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.247 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.281 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:00:53 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:00:53 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2694848661' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.703 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.709 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.750 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.753 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.754 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:00:53 localhost nova_compute[281854]: 2025-12-02 10:00:53.912 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:54 localhost nova_compute[281854]: 2025-12-02 10:00:54.231 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:55 localhost nova_compute[281854]: 2025-12-02 10:00:55.358 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:55 localhost nova_compute[281854]: 2025-12-02 10:00:55.359 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:55 localhost nova_compute[281854]: 2025-12-02 10:00:55.359 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:55 localhost nova_compute[281854]: 2025-12-02 10:00:55.359 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:00:55 localhost nova_compute[281854]: 2025-12-02 10:00:55.824 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:00:58 localhost nova_compute[281854]: 2025-12-02 10:00:58.942 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:00:59 localhost nova_compute[281854]: 2025-12-02 10:00:59.234 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.656126) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660656245, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 955, "num_deletes": 256, "total_data_size": 1794692, "memory_usage": 1818592, "flush_reason": "Manual Compaction"} Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660665488, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1174858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17619, "largest_seqno": 18569, "table_properties": {"data_size": 1170557, "index_size": 1964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10294, "raw_average_key_size": 20, "raw_value_size": 1161498, "raw_average_value_size": 2290, "num_data_blocks": 82, "num_entries": 507, "num_filter_entries": 507, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669610, "oldest_key_time": 1764669610, "file_creation_time": 1764669660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 9407 microseconds, and 3693 cpu microseconds. Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.665550) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1174858 bytes OK Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.665576) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667151) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667168) EVENT_LOG_v1 {"time_micros": 1764669660667163, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667192) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1789695, prev total WAL file size 
1790019, number of live WAL files 2. Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667794) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373637' seq:72057594037927935, type:22 .. '6C6F676D0034303139' seq:0, type:0; will stop at (end) Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1147KB)], [27(16MB)] Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660667835, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18422250, "oldest_snapshot_seqno": -1} Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11893 keys, 18282442 bytes, temperature: kUnknown Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660776250, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 18282442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18215227, "index_size": 36394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29765, "raw_key_size": 320650, "raw_average_key_size": 26, "raw_value_size": 18012802, 
"raw_average_value_size": 1514, "num_data_blocks": 1380, "num_entries": 11893, "num_filter_entries": 11893, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764669660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.776726) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 18282442 bytes Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.778207) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.7 rd, 168.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 16.4 +0.0 blob) out(17.4 +0.0 blob), read-write-amplify(31.2) write-amplify(15.6) OK, records in: 12430, records dropped: 537 output_compression: NoCompression Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.778230) EVENT_LOG_v1 {"time_micros": 1764669660778218, "job": 14, "event": "compaction_finished", "compaction_time_micros": 108548, "compaction_time_cpu_micros": 35624, "output_level": 6, "num_output_files": 1, "total_output_size": 18282442, "num_input_records": 12430, "num_output_records": 11893, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660778650, "job": 14, "event": "table_file_deletion", "file_number": 29} Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660781591, "job": 
14, "event": "table_file_deletion", "file_number": 27} Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.781691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.781698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.781701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.781703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:01:00.781705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:01:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:01:03.044 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:01:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:01:03.045 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 
05:01:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:01:03.045 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:01:03 localhost nova_compute[281854]: 2025-12-02 10:01:03.981 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:04 localhost openstack_network_exporter[242845]: ERROR 10:01:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:01:04 localhost openstack_network_exporter[242845]: ERROR 10:01:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:01:04 localhost openstack_network_exporter[242845]: ERROR 10:01:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:01:04 localhost openstack_network_exporter[242845]: ERROR 10:01:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:01:04 localhost openstack_network_exporter[242845]: Dec 2 05:01:04 localhost openstack_network_exporter[242845]: ERROR 10:01:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:01:04 localhost openstack_network_exporter[242845]: Dec 2 05:01:04 localhost nova_compute[281854]: 2025-12-02 10:01:04.236 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:06 localhost podman[240799]: time="2025-12-02T10:01:06Z" level=info msg="List containers: 
received `last` parameter - overwriting `limit`" Dec 2 05:01:06 localhost podman[240799]: @ - - [02/Dec/2025:10:01:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:01:06 localhost podman[240799]: @ - - [02/Dec/2025:10:01:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18729 "" "Go-http-client/1.1" Dec 2 05:01:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:01:06 localhost podman[307088]: 2025-12-02 10:01:06.451131008 +0000 UTC m=+0.087718884 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:01:06 localhost podman[307088]: 2025-12-02 10:01:06.463853209 +0000 UTC m=+0.100441085 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:01:06 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:01:09 localhost nova_compute[281854]: 2025-12-02 10:01:09.016 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:09 localhost nova_compute[281854]: 2025-12-02 10:01:09.240 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:01:11 localhost podman[307107]: 2025-12-02 10:01:11.437812024 +0000 UTC m=+0.081693893 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 05:01:11 localhost podman[307107]: 2025-12-02 10:01:11.472165506 +0000 UTC 
m=+0.116047365 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 05:01:11 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:01:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:01:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:01:13 localhost podman[307127]: 2025-12-02 10:01:13.443925227 +0000 UTC m=+0.087939530 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350) Dec 2 05:01:13 localhost podman[307127]: 2025-12-02 10:01:13.456125814 +0000 UTC m=+0.100140167 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.) Dec 2 05:01:13 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:01:13 localhost podman[307128]: 2025-12-02 10:01:13.545808111 +0000 UTC m=+0.186977688 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:01:13 localhost podman[307128]: 2025-12-02 10:01:13.578370255 +0000 UTC m=+0.219539842 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:01:13 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:01:14 localhost nova_compute[281854]: 2025-12-02 10:01:14.060 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:14 localhost nova_compute[281854]: 2025-12-02 10:01:14.243 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:15 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:01:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:16 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:17 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:01:19 localhost nova_compute[281854]: 2025-12-02 10:01:19.104 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:19 localhost nova_compute[281854]: 2025-12-02 10:01:19.245 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:20 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:01:20 localhost podman[307314]: 2025-12-02 10:01:20.503814795 +0000 UTC m=+0.127712187 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible) Dec 2 05:01:20 localhost podman[307314]: 2025-12-02 
10:01:20.520176445 +0000 UTC m=+0.144073837 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:01:20 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:01:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:01:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:01:23 localhost systemd[1]: tmp-crun.19rXuy.mount: Deactivated successfully. Dec 2 05:01:23 localhost podman[307333]: 2025-12-02 10:01:23.45201892 +0000 UTC m=+0.089768470 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:01:23 localhost podman[307333]: 2025-12-02 10:01:23.486083064 +0000 UTC m=+0.123832644 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, 
maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:01:23 localhost systemd[1]: tmp-crun.hEZ3O8.mount: Deactivated successfully. Dec 2 05:01:23 localhost podman[307334]: 2025-12-02 10:01:23.499225276 +0000 UTC m=+0.134391008 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base 
Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:01:23 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:01:23 localhost podman[307334]: 2025-12-02 10:01:23.567139839 +0000 UTC m=+0.202305571 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 2 05:01:23 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:01:24 localhost nova_compute[281854]: 2025-12-02 10:01:24.107 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:24 localhost nova_compute[281854]: 2025-12-02 10:01:24.247 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:27 localhost nova_compute[281854]: 2025-12-02 10:01:27.141 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:01:27.140 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:01:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:01:27.142 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:01:29 localhost nova_compute[281854]: 2025-12-02 10:01:29.154 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:29 
localhost nova_compute[281854]: 2025-12-02 10:01:29.251 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:30 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:34 localhost openstack_network_exporter[242845]: ERROR 10:01:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:01:34 localhost openstack_network_exporter[242845]: ERROR 10:01:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:01:34 localhost openstack_network_exporter[242845]: ERROR 10:01:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:01:34 localhost openstack_network_exporter[242845]: ERROR 10:01:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:01:34 localhost openstack_network_exporter[242845]: Dec 2 05:01:34 localhost openstack_network_exporter[242845]: ERROR 10:01:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:01:34 localhost openstack_network_exporter[242845]: Dec 2 05:01:34 localhost nova_compute[281854]: 2025-12-02 10:01:34.156 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:34 localhost nova_compute[281854]: 2025-12-02 10:01:34.255 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:36 localhost podman[240799]: 
time="2025-12-02T10:01:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:01:36 localhost podman[240799]: @ - - [02/Dec/2025:10:01:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:01:36 localhost podman[240799]: @ - - [02/Dec/2025:10:01:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18728 "" "Go-http-client/1.1" Dec 2 05:01:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:01:36.143 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:01:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:01:37 localhost podman[307383]: 2025-12-02 10:01:37.437302408 +0000 UTC m=+0.079478813 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute) Dec 2 05:01:37 localhost podman[307383]: 2025-12-02 10:01:37.452080995 +0000 UTC m=+0.094257390 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:01:37 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:01:39 localhost nova_compute[281854]: 2025-12-02 10:01:39.187 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:39 localhost nova_compute[281854]: 2025-12-02 10:01:39.257 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:40 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:01:42 localhost podman[307402]: 2025-12-02 10:01:42.441899215 +0000 UTC m=+0.078938720 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:01:42 localhost podman[307402]: 2025-12-02 10:01:42.477095389 +0000 UTC m=+0.114134884 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:01:42 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:01:43 localhost nova_compute[281854]: 2025-12-02 10:01:43.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:44 localhost nova_compute[281854]: 2025-12-02 10:01:44.227 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:44 localhost nova_compute[281854]: 2025-12-02 10:01:44.259 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:01:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:01:44 localhost systemd[1]: tmp-crun.BrzE66.mount: Deactivated successfully. 
Dec 2 05:01:44 localhost podman[307420]: 2025-12-02 10:01:44.452076466 +0000 UTC m=+0.094490857 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6) Dec 2 05:01:44 localhost podman[307421]: 2025-12-02 10:01:44.500999509 +0000 UTC m=+0.135982660 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', 
'--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:01:44 localhost podman[307420]: 2025-12-02 10:01:44.520174683 +0000 UTC m=+0.162589064 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, vcs-type=git, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:01:44 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:01:44 localhost podman[307421]: 2025-12-02 10:01:44.536666256 +0000 UTC m=+0.171649397 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:01:44 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:01:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:49 localhost nova_compute[281854]: 2025-12-02 10:01:49.261 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:01:49 localhost nova_compute[281854]: 2025-12-02 10:01:49.263 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:01:49 localhost nova_compute[281854]: 2025-12-02 10:01:49.263 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:01:49 localhost nova_compute[281854]: 2025-12-02 10:01:49.263 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:01:49 localhost nova_compute[281854]: 2025-12-02 10:01:49.264 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:49 localhost nova_compute[281854]: 2025-12-02 10:01:49.264 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:01:49 localhost nova_compute[281854]: 2025-12-02 10:01:49.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:49 localhost nova_compute[281854]: 2025-12-02 10:01:49.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] 
CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:01:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:01:51 localhost podman[307463]: 2025-12-02 10:01:51.437716245 +0000 UTC m=+0.081133208 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:01:51 localhost podman[307463]: 2025-12-02 10:01:51.4740307 +0000 UTC m=+0.117447623 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd) Dec 2 05:01:51 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:01:51 localhost nova_compute[281854]: 2025-12-02 10:01:51.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:51 localhost nova_compute[281854]: 2025-12-02 10:01:51.933 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:51 localhost nova_compute[281854]: 2025-12-02 10:01:51.952 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:01:51 localhost nova_compute[281854]: 2025-12-02 10:01:51.953 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:01:51 localhost nova_compute[281854]: 2025-12-02 10:01:51.953 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" 
:: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:01:51 localhost nova_compute[281854]: 2025-12-02 10:01:51.954 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:01:51 localhost nova_compute[281854]: 2025-12-02 10:01:51.954 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:01:52 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:01:52 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3800101711' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:01:52 localhost nova_compute[281854]: 2025-12-02 10:01:52.374 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.534 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.535 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.762 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.763 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11703MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.764 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.764 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.830 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.830 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.831 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:01:53 localhost nova_compute[281854]: 2025-12-02 10:01:53.874 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.266 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.268 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.269 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.269 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.297 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.298 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:01:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:01:54 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4063615434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.338 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.345 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:01:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.367 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.370 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:01:54 localhost nova_compute[281854]: 2025-12-02 10:01:54.370 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:01:54 localhost podman[307529]: 2025-12-02 10:01:54.446639899 +0000 UTC m=+0.085572968 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:01:54 localhost systemd[1]: tmp-crun.S18yD0.mount: Deactivated successfully. 
Dec 2 05:01:54 localhost podman[307528]: 2025-12-02 10:01:54.530049356 +0000 UTC m=+0.171914574 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:01:54 localhost podman[307529]: 2025-12-02 10:01:54.540327342 +0000 UTC m=+0.179260401 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 2 05:01:54 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:01:54 localhost podman[307528]: 2025-12-02 10:01:54.567921773 +0000 UTC m=+0.209786961 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:01:54 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.265 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.266 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.266 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.408 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.409 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.409 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.410 281858 
DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:01:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.809 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.825 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.826 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:55 localhost nova_compute[281854]: 2025-12-02 10:01:55.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e92 e92: 6 total, 6 up, 6 in Dec 2 05:01:56 localhost nova_compute[281854]: 2025-12-02 10:01:56.385 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 
05:01:56 localhost nova_compute[281854]: 2025-12-02 10:01:56.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:01:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 e93: 6 total, 6 up, 6 in Dec 2 05:01:59 localhost nova_compute[281854]: 2025-12-02 10:01:59.298 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:01:59 localhost nova_compute[281854]: 2025-12-02 10:01:59.300 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:01:59 localhost nova_compute[281854]: 2025-12-02 10:01:59.300 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:01:59 localhost nova_compute[281854]: 2025-12-02 10:01:59.301 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:01:59 localhost nova_compute[281854]: 2025-12-02 10:01:59.334 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:01:59 localhost nova_compute[281854]: 2025-12-02 10:01:59.336 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:00 localhost ovn_controller[154505]: 2025-12-02T10:02:00Z|00081|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory Dec 2 05:02:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:03.046 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:03.046 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:03.047 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:04 localhost openstack_network_exporter[242845]: ERROR 10:02:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:02:04 localhost openstack_network_exporter[242845]: ERROR 10:02:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:02:04 localhost openstack_network_exporter[242845]: ERROR 10:02:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:02:04 localhost openstack_network_exporter[242845]: ERROR 10:02:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:02:04 localhost openstack_network_exporter[242845]: Dec 2 05:02:04 localhost openstack_network_exporter[242845]: ERROR 10:02:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 
05:02:04 localhost openstack_network_exporter[242845]: Dec 2 05:02:04 localhost nova_compute[281854]: 2025-12-02 10:02:04.337 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:04 localhost nova_compute[281854]: 2025-12-02 10:02:04.339 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:04 localhost nova_compute[281854]: 2025-12-02 10:02:04.340 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:02:04 localhost nova_compute[281854]: 2025-12-02 10:02:04.340 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:04 localhost nova_compute[281854]: 2025-12-02 10:02:04.373 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:04 localhost nova_compute[281854]: 2025-12-02 10:02:04.374 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:06 localhost podman[240799]: time="2025-12-02T10:02:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:02:06 localhost podman[240799]: @ - - [02/Dec/2025:10:02:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:02:06 localhost podman[240799]: @ - - 
[02/Dec/2025:10:02:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18736 "" "Go-http-client/1.1" Dec 2 05:02:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:02:08 localhost systemd[1]: tmp-crun.eaMzie.mount: Deactivated successfully. Dec 2 05:02:08 localhost podman[307574]: 2025-12-02 10:02:08.440300715 +0000 UTC m=+0.082506825 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute) Dec 2 05:02:08 localhost podman[307574]: 2025-12-02 10:02:08.47812027 +0000 UTC m=+0.120326380 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:02:08 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:02:09 localhost nova_compute[281854]: 2025-12-02 10:02:09.375 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:09 localhost nova_compute[281854]: 2025-12-02 10:02:09.378 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:09 localhost nova_compute[281854]: 2025-12-02 10:02:09.378 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:02:09 localhost nova_compute[281854]: 2025-12-02 10:02:09.378 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:09 localhost nova_compute[281854]: 2025-12-02 10:02:09.404 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:09 localhost nova_compute[281854]: 2025-12-02 10:02:09.404 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:02:13 localhost podman[307593]: 2025-12-02 10:02:13.440327169 +0000 UTC m=+0.082094255 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:02:13 localhost podman[307593]: 2025-12-02 10:02:13.446419262 +0000 UTC 
m=+0.088185948 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:02:13 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:02:14 localhost nova_compute[281854]: 2025-12-02 10:02:14.406 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:14 localhost nova_compute[281854]: 2025-12-02 10:02:14.440 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:14 localhost nova_compute[281854]: 2025-12-02 10:02:14.440 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5036 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:02:14 localhost nova_compute[281854]: 2025-12-02 10:02:14.441 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:14 localhost nova_compute[281854]: 2025-12-02 10:02:14.442 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:14 localhost nova_compute[281854]: 2025-12-02 10:02:14.443 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:02:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:02:15 localhost systemd[1]: tmp-crun.kUMfUb.mount: Deactivated successfully. Dec 2 05:02:15 localhost systemd[299560]: Created slice User Background Tasks Slice. 
Dec 2 05:02:15 localhost podman[307611]: 2025-12-02 10:02:15.443060092 +0000 UTC m=+0.086181164 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.component=ubi9-minimal-container) Dec 2 05:02:15 localhost systemd[299560]: Starting Cleanup of User's Temporary Files and Directories... Dec 2 05:02:15 localhost systemd[299560]: Finished Cleanup of User's Temporary Files and Directories. Dec 2 05:02:15 localhost podman[307611]: 2025-12-02 10:02:15.480669171 +0000 UTC m=+0.123790233 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container) Dec 2 05:02:15 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:02:15 localhost podman[307612]: 2025-12-02 10:02:15.499382563 +0000 UTC m=+0.137813439 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:02:15 localhost podman[307612]: 2025-12-02 10:02:15.51195543 +0000 UTC m=+0.150386346 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:02:15 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:02:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.104 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.105 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.131 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.132 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': 'e09cd9d6-f618-4677-840e-0c564a65a7fc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.105587', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fb2f7006-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': 'a255f9f12c5a0d5d5d6d8e70dc0dd9a27e22e48adef1d4576cc16e604253d5fe'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.105587', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb2f84f6-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': 'fcc4db09da525c7969000476afa1dfb0e5f872695be940a07b39d99fb7d685c8'}]}, 'timestamp': '2025-12-02 10:02:16.132963', '_unique_id': '35abaf336e7949859ac710d9d3b00035'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.134 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.135 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.145 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.146 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '48d5b941-7d93-463b-b3d3-6c763461bfe1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.135967', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fb319cdc-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.35517945, 'message_signature': 'a2c27be467b98a2f4f1cb8c9ca9b0c3e14f01f1fa1a838574bdf22a6a989ebe6'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.135967', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb31b294-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.35517945, 'message_signature': '150d9b4dc378a76604de3c0e8ee65f8b98a75e76e1a5306a09d6d8a506d072c6'}]}, 'timestamp': '2025-12-02 10:02:16.147244', '_unique_id': '05c587ffa9f34bf899599c5ebd14e0d8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:02:16.148 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.148 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.150 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.150 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.150 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '31c81874-3a72-48c7-8b58-4d003ec1ee75', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.150188', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fb32389a-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.35517945, 'message_signature': 'bd34cc6d0f01e0e390ea760fa6f8f05a91e211757d92fd00edb1d4ad35a85b15'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.150188', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb324be6-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.35517945, 'message_signature': '1ab8f7ded5d2906493500f3200aef7bf927abd96e5af9174b9b83b16adbbb2a5'}]}, 'timestamp': '2025-12-02 10:02:16.151159', '_unique_id': '0187e113435144d583b785f07c2c5dc9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 
05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.152 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.153 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.156 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dd8b5d80-ab12-40c5-9e9d-71344ebe4d0b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.153568', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb333c7c-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 'message_signature': 'b42488375a3cf2cfc4f6eb342a8c5af0a8fa3cce02470045268914d7fe9a8dd2'}]}, 'timestamp': '2025-12-02 10:02:16.157358', '_unique_id': '9c01dfae69f64c2485c432d55bfd3898'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.158 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.159 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.159 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6a5fa7ab-2824-44fe-bd86-e6030b7383be', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.159846', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb33b18e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 'message_signature': 'bbf5d605c34ae673371f2dc8a36b4b4d6af4f46e692bdc569490e4d7dc064e2e'}]}, 'timestamp': '2025-12-02 10:02:16.160347', '_unique_id': '5614f0ddd9e8443a95c7d6667d900482'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:02:16.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.162 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.162 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '89d4e847-9cbf-42fd-9c44-86801242fee7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.162673', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb3421c8-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 'message_signature': 'f983b72d0bde01fede380afae2cca0f6a69dcce1d624cd861595dd3e342124c8'}]}, 'timestamp': '2025-12-02 10:02:16.163293', '_unique_id': '8f5ebf26a7294791adf5d390e56bed2e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.164 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.165 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.165 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '16990482-79a0-4839-af96-e22ba7e6d4ce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.165732', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb34973e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 'message_signature': '97b23688c201819248b40f15ad130e6df85ffe6a798227ae57c69e6d5611cce1'}]}, 'timestamp': '2025-12-02 10:02:16.166244', '_unique_id': '114cc659dbfd4430aaa3faf08fa70f87'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.167 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.168 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.168 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.169 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '95654a20-8098-4207-af30-94ffe5579822', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.168685', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fb350ade-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.35517945, 'message_signature': '0cd9d5fdd38605f0ea6442e328b36fc6d255cc8bfe3f449f5e30de1f3531e726'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.168685', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb351c5e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.35517945, 'message_signature': 'c564dc5cf7c7618cfe82cca23340fa5df2bfb42a59df9badb415f2125a0f665b'}]}, 'timestamp': '2025-12-02 10:02:16.169600', '_unique_id': 'f9c4fef0d4b440cfaa3b01cf0d779fd6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.172 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.172 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.172 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '8df9bc96-6126-4fce-b675-889bd2d3a6bf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.172160', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fb359242-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': '67813b00a37f99c2d3dacfdd5ae7e06f8e7941979290978469d0e6675d36f6c6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.172160', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb35a502-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': 'b594c9b897b79d4d33a90e1ab3c079fd49fd0eaacd59c944c31c614337bbacd4'}]}, 'timestamp': '2025-12-02 10:02:16.173146', '_unique_id': 'b7af9c92e3c4470bb4ae73c0688fab82'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 
05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.175 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.190 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 15810000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dbc15d08-a227-4c1d-8d88-f10d4e8f25ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15810000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:02:16.175559', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'fb386fda-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.409762965, 'message_signature': '65d7aa6fafc56e8d4405eb0f90e0ec83111049f9111d096d3e22dd45e3589cc2'}]}, 'timestamp': '2025-12-02 10:02:16.191474', '_unique_id': '6ec5cc9e618f4b988659019f9ade9774'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.192 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.194 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:02:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '14dfe020-cab9-4234-8e1c-3a5bd09b4029', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.194211', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': 'fb38ef1e-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': 'aa4310bcd396e46247434653c86713a4e29cc08a9bc41daee9678d2b61a7faa6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.194211', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb3903e6-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': 'e48601afaf4e7abc9e9103d480d79be0eaa35d601202d0dc79f55f86439c5e92'}]}, 'timestamp': '2025-12-02 10:02:16.195253', '_unique_id': 'ceea1c433d82438da46af64cd11eaee8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.196 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.197 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 
05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.197 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e7b00a3e-7f15-4a82-8582-1dd50d4ceb68', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.197811', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb397c36-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 
'message_signature': '9c987d7d56f671c730eb7300e9f496bfc013a9ea0bece8fda058150e0db542bb'}]}, 'timestamp': '2025-12-02 10:02:16.198298', '_unique_id': '10a88d4c45ac44638eee5c25c34400d7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging raise 
ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.199 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.201 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.201 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.201 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'add1ca88-7341-4e29-a884-bbc96ca6b6e7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.201191', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fb3a02b4-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': 'e2639c7d2d5b93af0c00faaf7968a77853de251ea1a35ea4d0afc49450e53933'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.201191', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb3a18f8-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': 'fa60656bdb22c9081dc711b7cfd899a8ab1834da4cb28387c64220f9ebfb51fe'}]}, 'timestamp': '2025-12-02 10:02:16.202270', '_unique_id': '882db7b3046e49dfae26b8ebbb73fc7c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.203 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.204 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.204 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e4a7f6b9-5ab7-4c39-a612-de4a749dedfe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.204889', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb3a90b2-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 'message_signature': '94724d8d1d4f2971d476c905f865114fdb4df940b6baf6577a421be09e1d46f8'}]}, 'timestamp': '2025-12-02 10:02:16.205360', '_unique_id': '93b7bd9ea36b4e2c9ea034638a0b7193'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.206 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.207 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.207 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '410da770-f039-468d-912b-bc930278c44e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.207765', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb3b0420-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 'message_signature': 'b15b88eeff625dde81bf44136fab72a4e8dde484277fd205ecbf025b48369b5e'}]}, 'timestamp': '2025-12-02 10:02:16.208318', '_unique_id': 'c91350ba56ae49d48e9bfa0c9f5a7b56'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.209 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.210 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.210 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43b97fb2-44ad-4d72-b531-e8e06167b43c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:02:16.210721', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'fb3b740a-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.409762965, 'message_signature': '3e7fd3dabd6bfe7610a782d810676958d1a204f24a7e53a7db89e08a7fe8fbc2'}]}, 'timestamp': '2025-12-02 10:02:16.211166', '_unique_id': '556521fdfb3d458c8e6a7114d4166b43'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.213 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:02:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c6dbb797-4663-4dfd-ab5e-2f87f52aa2a3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.213338', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 
0, 'disk_name': 'vda'}, 'message_id': 'fb3bda12-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': '601c0a785ff034b4a3cd06d58fcb480508c1155efb575ed55a1dd494849ba054'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.213338', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb3beb88-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': '5631e2e7e913bbd4027e30ddcb462fea579bc6a5284c3ccd387ea973857f0e72'}]}, 'timestamp': '2025-12-02 10:02:16.214207', '_unique_id': 'a7edec14717d4d668a2135a1dc8b4588'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.216 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters 
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.216 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '055ab8dd-3e8d-4a9e-851d-7d7407b70911', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.216583', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb3c5a96-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 
11898.37270087, 'message_signature': '0631d6dc7d10a3f22e9d5d95994cfffce26aa67e73a5849a1c9564bd03c2dbc0'}]}, 'timestamp': '2025-12-02 10:02:16.217079', '_unique_id': '6ea71566d682449ab0534212fd8a452f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:02:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.218 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '785c7af2-9ddb-43e0-9a06-5873d5551a2d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.220234', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb3ceed4-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 'message_signature': 'edecdcced86536e860068dca218419ca5ce8a3acbd4c131f076007b6819b3160'}]}, 'timestamp': '2025-12-02 10:02:16.221016', '_unique_id': 'b0d70c97d35b4b0db4e3dc06918d16b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.222 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 10:02:16.223 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2eab6219-524b-42ad-b907-f22d5bebe898', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:02:16.223699', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fb3d6c24-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': '71aa76c78f2fcb2e54552428264edd8a0624fed02dfa91d67a6f50e864c7e1f8'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:02:16.223699', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fb3d76d8-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.324695962, 'message_signature': 'c15da7fe6216236c638e222f7321c91b12436c8e54e21978fcdf702c3585b22c'}]}, 'timestamp': '2025-12-02 10:02:16.224282', '_unique_id': '86eb41953eca439da6581be9f15642da'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.224 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 10:02:16.225 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.225 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4658116c-2d11-4c2c-b1f1-cc1ff6674e09', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:02:16.225778', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'fb3dbd78-cf65-11f0-a0da-fa163e3f40cc', 'monotonic_time': 11898.37270087, 'message_signature': '7d634b636f180a9cf526b5148fe046bd39f4a0790de9352441a5ce1d2ab52721'}]}, 'timestamp': '2025-12-02 10:02:16.226087', '_unique_id': 'e31c3f4c6cce46f59f39125d0dcf7a99'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.226 12 ERROR oslo_messaging.notify.messaging Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.227 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:02:16.227 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:02:16 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:02:16 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:02:16 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:02:19 localhost nova_compute[281854]: 2025-12-02 10:02:19.443 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:19 localhost nova_compute[281854]: 2025-12-02 10:02:19.446 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:19 localhost nova_compute[281854]: 2025-12-02 10:02:19.446 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, 
sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:02:19 localhost nova_compute[281854]: 2025-12-02 10:02:19.446 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:19 localhost nova_compute[281854]: 2025-12-02 10:02:19.481 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:19 localhost nova_compute[281854]: 2025-12-02 10:02:19.482 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:02:22 localhost podman[307741]: 2025-12-02 10:02:22.450791611 +0000 UTC m=+0.086651137 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:02:22 localhost podman[307741]: 2025-12-02 10:02:22.488284617 +0000 UTC m=+0.124144163 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:02:22 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:02:24 localhost nova_compute[281854]: 2025-12-02 10:02:24.483 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:24 localhost nova_compute[281854]: 2025-12-02 10:02:24.485 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:24 localhost nova_compute[281854]: 2025-12-02 10:02:24.485 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:02:24 localhost nova_compute[281854]: 2025-12-02 10:02:24.485 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:24 localhost nova_compute[281854]: 2025-12-02 10:02:24.523 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:24 localhost nova_compute[281854]: 2025-12-02 10:02:24.523 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:02:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:02:25 localhost systemd[1]: tmp-crun.5sdv63.mount: Deactivated successfully. 
Dec 2 05:02:25 localhost podman[307760]: 2025-12-02 10:02:25.458909602 +0000 UTC m=+0.097407255 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:02:25 localhost podman[307760]: 2025-12-02 10:02:25.498195947 +0000 UTC m=+0.136693660 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:02:25 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:02:25 localhost podman[307761]: 2025-12-02 10:02:25.503361555 +0000 UTC m=+0.138444236 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:02:25 localhost podman[307761]: 2025-12-02 10:02:25.584179144 +0000 UTC m=+0.219261785 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:02:25 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:02:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:28 localhost nova_compute[281854]: 2025-12-02 10:02:28.872 281858 DEBUG oslo_concurrency.processutils [None req-23272247-f378-467d-94da-4db8e7b85b30 a61e36d3a8ec4e5abb065f4dc9d19030 bacffdfceba742b2a0f3443d4df622d9 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:28 localhost nova_compute[281854]: 2025-12-02 10:02:28.892 281858 DEBUG oslo_concurrency.processutils [None req-23272247-f378-467d-94da-4db8e7b85b30 a61e36d3a8ec4e5abb065f4dc9d19030 bacffdfceba742b2a0f3443d4df622d9 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:29 localhost nova_compute[281854]: 2025-12-02 10:02:29.038 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:29 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:29.038 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:02:29 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:29.040 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating 
chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:02:29 localhost nova_compute[281854]: 2025-12-02 10:02:29.570 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:30 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:31 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:31.042 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:02:34 localhost openstack_network_exporter[242845]: ERROR 10:02:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:02:34 localhost openstack_network_exporter[242845]: ERROR 10:02:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:02:34 localhost openstack_network_exporter[242845]: ERROR 10:02:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:02:34 localhost openstack_network_exporter[242845]: ERROR 10:02:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:02:34 localhost openstack_network_exporter[242845]: Dec 2 05:02:34 localhost openstack_network_exporter[242845]: ERROR 10:02:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:02:34 localhost openstack_network_exporter[242845]: Dec 2 05:02:34 localhost nova_compute[281854]: 2025-12-02 10:02:34.572 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:34 localhost nova_compute[281854]: 2025-12-02 10:02:34.575 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:36 localhost podman[240799]: time="2025-12-02T10:02:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:02:36 localhost podman[240799]: @ - - [02/Dec/2025:10:02:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:02:36 localhost podman[240799]: @ - - [02/Dec/2025:10:02:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18739 "" "Go-http-client/1.1" Dec 2 05:02:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:38.956 263406 INFO oslo.privsep.daemon [None req-581de5e2-9df3-4050-9744-5ce3f59e019f - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpf9vaplw0/privsep.sock']#033[00m Dec 2 05:02:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:02:39 localhost podman[307813]: 2025-12-02 10:02:39.163413996 +0000 UTC m=+0.083261505 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:02:39 localhost podman[307813]: 2025-12-02 10:02:39.173681892 +0000 UTC m=+0.093529391 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Dec 2 05:02:39 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:02:39 localhost nova_compute[281854]: 2025-12-02 10:02:39.547 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:39.554 263406 INFO oslo.privsep.daemon [None req-581de5e2-9df3-4050-9744-5ce3f59e019f - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 2 05:02:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:39.449 307833 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 05:02:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:39.454 307833 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 05:02:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:39.458 307833 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Dec 2 05:02:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:39.458 307833 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307833#033[00m Dec 2 05:02:39 localhost nova_compute[281854]: 2025-12-02 10:02:39.574 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:40.108 263406 INFO oslo.privsep.daemon [None req-581de5e2-9df3-4050-9744-5ce3f59e019f - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpop6amzyq/privsep.sock']#033[00m Dec 2 05:02:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:40.781 
263406 INFO oslo.privsep.daemon [None req-581de5e2-9df3-4050-9744-5ce3f59e019f - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 2 05:02:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:40.671 307842 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 05:02:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:40.676 307842 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 2 05:02:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:40.680 307842 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Dec 2 05:02:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:40.680 307842 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307842#033[00m Dec 2 05:02:40 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:41.845 263406 INFO oslo.privsep.daemon [None req-581de5e2-9df3-4050-9744-5ce3f59e019f - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpzogwehhu/privsep.sock']#033[00m Dec 2 05:02:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:42.456 263406 INFO oslo.privsep.daemon [None req-581de5e2-9df3-4050-9744-5ce3f59e019f - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 2 05:02:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:42.367 307854 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 05:02:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:42.372 307854 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 
2 05:02:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:42.375 307854 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Dec 2 05:02:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:42.376 307854 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307854#033[00m Dec 2 05:02:42 localhost nova_compute[281854]: 2025-12-02 10:02:42.592 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:43 localhost nova_compute[281854]: 2025-12-02 10:02:43.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:43.856 263406 INFO neutron.agent.linux.ip_lib [None req-581de5e2-9df3-4050-9744-5ce3f59e019f - - - - - -] Device tapfbe9f539-2c cannot be used as it has no MAC address#033[00m Dec 2 05:02:43 localhost nova_compute[281854]: 2025-12-02 10:02:43.932 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:43 localhost kernel: device tapfbe9f539-2c entered promiscuous mode Dec 2 05:02:43 localhost NetworkManager[5965]: [1764669763.9440] manager: (tapfbe9f539-2c): new Generic device (/org/freedesktop/NetworkManager/Devices/18) Dec 2 05:02:43 localhost ovn_controller[154505]: 2025-12-02T10:02:43Z|00082|binding|INFO|Claiming lport fbe9f539-2caa-4225-b0aa-ee0756eec0f0 for this chassis. 
Dec 2 05:02:43 localhost nova_compute[281854]: 2025-12-02 10:02:43.945 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:43 localhost ovn_controller[154505]: 2025-12-02T10:02:43Z|00083|binding|INFO|fbe9f539-2caa-4225-b0aa-ee0756eec0f0: Claiming unknown Dec 2 05:02:43 localhost ovn_controller[154505]: 2025-12-02T10:02:43Z|00084|binding|INFO|Setting lport fbe9f539-2caa-4225-b0aa-ee0756eec0f0 ovn-installed in OVS Dec 2 05:02:43 localhost nova_compute[281854]: 2025-12-02 10:02:43.954 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:43 localhost systemd-udevd[307869]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:02:43 localhost nova_compute[281854]: 2025-12-02 10:02:43.956 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:43 localhost journal[230136]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Dec 2 05:02:43 localhost journal[230136]: hostname: np0005541913.localdomain Dec 2 05:02:43 localhost journal[230136]: ethtool ioctl error on tapfbe9f539-2c: No such device Dec 2 05:02:43 localhost journal[230136]: ethtool ioctl error on tapfbe9f539-2c: No such device Dec 2 05:02:43 localhost nova_compute[281854]: 2025-12-02 10:02:43.982 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:02:43 localhost journal[230136]: ethtool ioctl error on tapfbe9f539-2c: No such device Dec 2 05:02:43 localhost journal[230136]: ethtool ioctl error on tapfbe9f539-2c: No such device Dec 2 05:02:43 localhost journal[230136]: ethtool ioctl error on tapfbe9f539-2c: No such device Dec 2 05:02:44 localhost journal[230136]: ethtool ioctl error on tapfbe9f539-2c: No such device Dec 2 05:02:44 localhost journal[230136]: ethtool ioctl error on tapfbe9f539-2c: No such device Dec 2 05:02:44 localhost journal[230136]: ethtool ioctl error on tapfbe9f539-2c: No such device Dec 2 05:02:44 localhost nova_compute[281854]: 2025-12-02 10:02:44.017 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:44 localhost nova_compute[281854]: 2025-12-02 10:02:44.047 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:44 localhost podman[307878]: 2025-12-02 10:02:44.1187379 +0000 UTC m=+0.120710900 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Dec 2 05:02:44 localhost podman[307878]: 2025-12-02 10:02:44.149524576 +0000 UTC m=+0.151497526 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:02:44 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:02:44 localhost ovn_controller[154505]: 2025-12-02T10:02:44Z|00085|binding|INFO|Setting lport fbe9f539-2caa-4225-b0aa-ee0756eec0f0 up in Southbound Dec 2 05:02:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:44.261 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c5273a4-e474-4c2c-a95a-a522e1a174bd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fbe9f539-2caa-4225-b0aa-ee0756eec0f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:02:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:44.264 160221 INFO neutron.agent.ovn.metadata.agent [-] Port fbe9f539-2caa-4225-b0aa-ee0756eec0f0 in datapath 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f bound to our chassis#033[00m Dec 2 05:02:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:44.268 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port b22990f2-0db4-407c-a5b6-65e7991152d1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:02:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:44.268 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:02:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:02:44.270 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6d383b1e-7432-428b-b66d-b227847827ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:02:44 localhost nova_compute[281854]: 2025-12-02 10:02:44.615 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:45 localhost 
podman[307959]: Dec 2 05:02:45 localhost podman[307959]: 2025-12-02 10:02:45.279431906 +0000 UTC m=+0.083835790 container create 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:02:45 localhost systemd[1]: Started libpod-conmon-2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2.scope. Dec 2 05:02:45 localhost podman[307959]: 2025-12-02 10:02:45.240516082 +0000 UTC m=+0.044920026 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:02:45 localhost systemd[1]: Started libcrun container. 
Dec 2 05:02:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93ad5d2b9af04d633613c8f460d48e56923a84b4e7f2b732ec5f908e2b44d433/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:02:45 localhost podman[307959]: 2025-12-02 10:02:45.356396792 +0000 UTC m=+0.160800706 container init 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:02:45 localhost podman[307959]: 2025-12-02 10:02:45.36602938 +0000 UTC m=+0.170433284 container start 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:02:45 localhost dnsmasq[307978]: started, version 2.85 cachesize 150 Dec 2 05:02:45 localhost dnsmasq[307978]: DNS service limited to local subnets Dec 2 05:02:45 localhost dnsmasq[307978]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:02:45 localhost dnsmasq[307978]: warning: no upstream servers configured Dec 
2 05:02:45 localhost dnsmasq-dhcp[307978]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:02:45 localhost dnsmasq[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/addn_hosts - 0 addresses Dec 2 05:02:45 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/host Dec 2 05:02:45 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/opts Dec 2 05:02:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:45.436 263406 INFO neutron.agent.dhcp.agent [None req-8bb7a97c-e972-4f91-946e-ce71eb4abfe8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:02:40Z, description=, device_id=f7309812-362b-4bd1-84da-e909158b6cbe, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7878efd7-90f1-41dd-b669-949b08145e13, ip_allocation=immediate, mac_address=fa:16:3e:fd:f0:c0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:02:35Z, description=, dns_domain=, id=62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-307256986-network, port_security_enabled=True, project_id=cccbafb2e3c343b2aab51714734bddce, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28433, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=182, status=ACTIVE, subnets=['9bd66995-30b3-4c53-b58b-ce3f8d5848fa'], tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, updated_at=2025-12-02T10:02:37Z, vlan_transparent=None, network_id=62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, port_security_enabled=False, 
project_id=cccbafb2e3c343b2aab51714734bddce, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=190, status=DOWN, tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, updated_at=2025-12-02T10:02:40Z on network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f#033[00m Dec 2 05:02:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:45.552 263406 INFO neutron.agent.dhcp.agent [None req-97c34d3b-174e-4349-83a3-aca6766d33fe - - - - - -] DHCP configuration for ports {'ea045be8-e121-4ff5-bb82-2a757b7ce736'} is completed#033[00m Dec 2 05:02:45 localhost dnsmasq[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/addn_hosts - 1 addresses Dec 2 05:02:45 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/host Dec 2 05:02:45 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/opts Dec 2 05:02:45 localhost podman[307996]: 2025-12-02 10:02:45.645504 +0000 UTC m=+0.053117386 container kill 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:02:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:45.762 263406 INFO neutron.agent.dhcp.agent [None req-f8296964-90c5-493b-9fe6-7a4ac869da28 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2025-12-02T10:02:40Z, description=, device_id=f7309812-362b-4bd1-84da-e909158b6cbe, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7878efd7-90f1-41dd-b669-949b08145e13, ip_allocation=immediate, mac_address=fa:16:3e:fd:f0:c0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:02:35Z, description=, dns_domain=, id=62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-307256986-network, port_security_enabled=True, project_id=cccbafb2e3c343b2aab51714734bddce, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28433, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=182, status=ACTIVE, subnets=['9bd66995-30b3-4c53-b58b-ce3f8d5848fa'], tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, updated_at=2025-12-02T10:02:37Z, vlan_transparent=None, network_id=62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, port_security_enabled=False, project_id=cccbafb2e3c343b2aab51714734bddce, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=190, status=DOWN, tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, updated_at=2025-12-02T10:02:40Z on network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f#033[00m Dec 2 05:02:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:45.905 263406 INFO neutron.agent.dhcp.agent [None req-7c9cfc7d-febc-40dc-846c-5a189c8e6784 - - - - - -] DHCP configuration for ports {'7878efd7-90f1-41dd-b669-949b08145e13'} is completed#033[00m Dec 2 05:02:45 localhost dnsmasq[307978]: read 
/var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/addn_hosts - 1 addresses Dec 2 05:02:45 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/host Dec 2 05:02:45 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/opts Dec 2 05:02:45 localhost podman[308033]: 2025-12-02 10:02:45.996036716 +0000 UTC m=+0.057620777 container kill 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:02:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:02:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:02:46 localhost podman[308052]: 2025-12-02 10:02:46.197197585 +0000 UTC m=+0.085038993 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, vendor=Red Hat, Inc.) Dec 2 05:02:46 localhost podman[308052]: 2025-12-02 10:02:46.211214201 +0000 UTC m=+0.099055609 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 05:02:46 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:02:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:02:46.255 263406 INFO neutron.agent.dhcp.agent [None req-6dfbdc49-02de-4abd-aace-58ed827e1bbc - - - - - -] DHCP configuration for ports {'7878efd7-90f1-41dd-b669-949b08145e13'} is completed#033[00m Dec 2 05:02:46 localhost podman[308054]: 2025-12-02 10:02:46.304659488 +0000 UTC m=+0.189926398 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:02:46 localhost podman[308054]: 2025-12-02 10:02:46.316049234 +0000 UTC m=+0.201316124 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:02:46 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. 
Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.122984) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768123046, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1557, "num_deletes": 251, "total_data_size": 2356118, "memory_usage": 2499776, "flush_reason": "Manual Compaction"} Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768134351, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 1534273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18574, "largest_seqno": 20126, "table_properties": {"data_size": 1528251, "index_size": 3300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13615, "raw_average_key_size": 20, "raw_value_size": 1515825, "raw_average_value_size": 2321, "num_data_blocks": 142, "num_entries": 653, "num_filter_entries": 653, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669660, "oldest_key_time": 1764669660, "file_creation_time": 1764669768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 11391 microseconds, and 4530 cpu microseconds. Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.134397) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 1534273 bytes OK Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.134421) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.136803) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.136824) EVENT_LOG_v1 {"time_micros": 1764669768136817, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.136845) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2348808, prev total WAL file size 
2348808, number of live WAL files 2. Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.137570) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(1498KB)], [30(17MB)] Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768137758, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19816715, "oldest_snapshot_seqno": -1} Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12014 keys, 17167464 bytes, temperature: kUnknown Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768230034, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 17167464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17099933, "index_size": 36390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 323655, "raw_average_key_size": 26, "raw_value_size": 16895879, 
"raw_average_value_size": 1406, "num_data_blocks": 1378, "num_entries": 12014, "num_filter_entries": 12014, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764669768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.230367) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 17167464 bytes Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.232745) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.5 rd, 185.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 17.4 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(24.1) write-amplify(11.2) OK, records in: 12546, records dropped: 532 output_compression: NoCompression Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.232774) EVENT_LOG_v1 {"time_micros": 1764669768232761, "job": 16, "event": "compaction_finished", "compaction_time_micros": 92369, "compaction_time_cpu_micros": 44721, "output_level": 6, "num_output_files": 1, "total_output_size": 17167464, "num_input_records": 12546, "num_output_records": 12014, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768233115, "job": 16, "event": "table_file_deletion", "file_number": 32} Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768235701, "job": 
16, "event": "table_file_deletion", "file_number": 30} Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.137494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.235806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.235815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.235818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.235821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:48 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:02:48.235824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:02:49 localhost nova_compute[281854]: 2025-12-02 10:02:49.618 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4886-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:49 localhost nova_compute[281854]: 2025-12-02 10:02:49.620 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:49 localhost nova_compute[281854]: 2025-12-02 10:02:49.620 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:02:49 localhost nova_compute[281854]: 2025-12-02 10:02:49.620 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:49 localhost nova_compute[281854]: 2025-12-02 10:02:49.647 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:49 localhost nova_compute[281854]: 2025-12-02 10:02:49.647 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:51 localhost nova_compute[281854]: 2025-12-02 10:02:51.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:51 localhost nova_compute[281854]: 2025-12-02 10:02:51.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:02:52 localhost nova_compute[281854]: 2025-12-02 10:02:52.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:52 localhost nova_compute[281854]: 2025-12-02 10:02:52.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:02:52 localhost nova_compute[281854]: 2025-12-02 10:02:52.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:02:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:02:52 localhost podman[308100]: 2025-12-02 10:02:52.975692662 +0000 UTC m=+0.112327946 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:02:52 localhost podman[308100]: 2025-12-02 10:02:52.99313321 +0000 UTC m=+0.129768474 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:02:53 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.075 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.076 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.076 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.076 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.650 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.692 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.692 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5044 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 
05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.692 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.693 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:02:54 localhost nova_compute[281854]: 2025-12-02 10:02:54.696 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.244 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.270 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.271 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.271 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.272 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.293 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.294 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock 
"compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.294 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.295 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.295 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:02:55 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/660414268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.792 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.864 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:02:55 localhost nova_compute[281854]: 2025-12-02 10:02:55.864 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.093 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.095 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11388MB free_disk=41.833717346191406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.095 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.096 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:02:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:02:56 localhost podman[308141]: 2025-12-02 10:02:56.446295843 +0000 UTC m=+0.082980197 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:02:56 localhost podman[308141]: 2025-12-02 10:02:56.454981857 +0000 UTC m=+0.091666211 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:02:56 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:02:56 localhost podman[308142]: 2025-12-02 10:02:56.504126035 +0000 UTC m=+0.139043273 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true) Dec 2 05:02:56 localhost podman[308142]: 2025-12-02 10:02:56.551104796 +0000 UTC m=+0.186022104 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller) Dec 2 05:02:56 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.621 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.632 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.633 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.633 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:02:56 localhost nova_compute[281854]: 2025-12-02 10:02:56.878 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.141 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 05:02:57 
localhost nova_compute[281854]: 2025-12-02 10:02:57.142 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.162 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.187 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: 
COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.231 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:02:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e94 e94: 6 total, 6 up, 6 in Dec 2 05:02:57 localhost 
ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:02:57 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2247793407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.698 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.705 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.729 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.732 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain 
_update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.732 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.733 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:57 localhost nova_compute[281854]: 2025-12-02 10:02:57.734 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 05:02:59 localhost nova_compute[281854]: 2025-12-02 10:02:59.305 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:59 localhost nova_compute[281854]: 2025-12-02 10:02:59.306 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:59 localhost nova_compute[281854]: 2025-12-02 10:02:59.306 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:59 localhost nova_compute[281854]: 2025-12-02 10:02:59.307 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:02:59 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e95 e95: 6 total, 6 up, 6 in Dec 2 05:02:59 localhost nova_compute[281854]: 2025-12-02 10:02:59.724 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:01 localhost nova_compute[281854]: 2025-12-02 10:03:01.723 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:01 localhost nova_compute[281854]: 2025-12-02 10:03:01.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:01 localhost nova_compute[281854]: 2025-12-02 10:03:01.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 05:03:01 localhost nova_compute[281854]: 2025-12-02 10:03:01.851 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] There are 0 instances to clean _run_pending_deletes 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 05:03:01 localhost nova_compute[281854]: 2025-12-02 10:03:01.852 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:03.047 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:03.047 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:03.048 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:04 localhost openstack_network_exporter[242845]: ERROR 10:03:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:03:04 localhost openstack_network_exporter[242845]: ERROR 10:03:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:03:04 localhost openstack_network_exporter[242845]: ERROR 10:03:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:03:04 localhost openstack_network_exporter[242845]: ERROR 
10:03:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:03:04 localhost openstack_network_exporter[242845]: Dec 2 05:03:04 localhost openstack_network_exporter[242845]: ERROR 10:03:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:03:04 localhost openstack_network_exporter[242845]: Dec 2 05:03:04 localhost neutron_sriov_agent[256494]: 2025-12-02 10:03:04.330 2 INFO neutron.agent.securitygroups_rpc [None req-5c06dfad-89c5-4abc-a7de-de583f339085 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']#033[00m Dec 2 05:03:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:04.421 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:03Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=31de197b-ef56-4d2a-9fa2-293715a60004, ip_allocation=immediate, mac_address=fa:16:3e:8f:bb:bd, name=tempest-parent-17247491, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:02:35Z, description=, dns_domain=, id=62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-307256986-network, port_security_enabled=True, project_id=cccbafb2e3c343b2aab51714734bddce, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28433, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=182, status=ACTIVE, subnets=['9bd66995-30b3-4c53-b58b-ce3f8d5848fa'], tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, 
updated_at=2025-12-02T10:02:37Z, vlan_transparent=None, network_id=62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, port_security_enabled=True, project_id=cccbafb2e3c343b2aab51714734bddce, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['5c93e274-85ac-42d3-b949-bdb62e6b8c39'], standard_attr_id=324, status=DOWN, tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, updated_at=2025-12-02T10:03:03Z on network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f#033[00m Dec 2 05:03:04 localhost dnsmasq[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/addn_hosts - 2 addresses Dec 2 05:03:04 localhost podman[308227]: 2025-12-02 10:03:04.646719008 +0000 UTC m=+0.060998718 container kill 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 05:03:04 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/host Dec 2 05:03:04 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/opts Dec 2 05:03:04 localhost nova_compute[281854]: 2025-12-02 10:03:04.751 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:04.881 263406 INFO neutron.agent.dhcp.agent [None req-f663a97d-1aec-4c39-9524-3a24e53a4d1b - - - - - -] DHCP configuration for ports {'31de197b-ef56-4d2a-9fa2-293715a60004'} is completed#033[00m 
Dec 2 05:03:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e96 e96: 6 total, 6 up, 6 in Dec 2 05:03:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:06 localhost podman[240799]: time="2025-12-02T10:03:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:03:06 localhost podman[240799]: @ - - [02/Dec/2025:10:03:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156107 "" "Go-http-client/1.1" Dec 2 05:03:06 localhost podman[240799]: @ - - [02/Dec/2025:10:03:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19228 "" "Go-http-client/1.1" Dec 2 05:03:07 localhost neutron_sriov_agent[256494]: 2025-12-02 10:03:07.116 2 INFO neutron.agent.securitygroups_rpc [None req-897aec69-e9e3-465e-bb92-a062d09dda9e 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']#033[00m Dec 2 05:03:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:07.313 263406 INFO neutron.agent.linux.ip_lib [None req-40676a43-f972-46a9-93c8-453d0ba44b2e - - - - - -] Device tap07dfafb4-09 cannot be used as it has no MAC address#033[00m Dec 2 05:03:07 localhost nova_compute[281854]: 2025-12-02 10:03:07.335 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:07 localhost kernel: device tap07dfafb4-09 entered promiscuous mode Dec 2 05:03:07 localhost ovn_controller[154505]: 2025-12-02T10:03:07Z|00086|binding|INFO|Claiming lport 07dfafb4-0984-469d-a49c-9faf3746b302 for this chassis. 
Dec 2 05:03:07 localhost ovn_controller[154505]: 2025-12-02T10:03:07Z|00087|binding|INFO|07dfafb4-0984-469d-a49c-9faf3746b302: Claiming unknown Dec 2 05:03:07 localhost NetworkManager[5965]: [1764669787.3468] manager: (tap07dfafb4-09): new Generic device (/org/freedesktop/NetworkManager/Devices/19) Dec 2 05:03:07 localhost nova_compute[281854]: 2025-12-02 10:03:07.346 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:07 localhost systemd-udevd[308260]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:03:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:07.357 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.3/24', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ebc33b-05e4-4907-9bc1-7e563b7692f1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=07dfafb4-0984-469d-a49c-9faf3746b302) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:07 localhost ovn_controller[154505]: 2025-12-02T10:03:07Z|00088|binding|INFO|Setting lport 07dfafb4-0984-469d-a49c-9faf3746b302 ovn-installed in OVS Dec 2 05:03:07 localhost ovn_controller[154505]: 2025-12-02T10:03:07Z|00089|binding|INFO|Setting lport 07dfafb4-0984-469d-a49c-9faf3746b302 up in Southbound Dec 2 05:03:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:07.360 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 07dfafb4-0984-469d-a49c-9faf3746b302 in datapath 3673812c-f461-4e86-831f-b7a7821f4bda bound to our chassis#033[00m Dec 2 05:03:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:07.363 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 50e76764-b6f4-47d9-9fe0-99e7b5813c75 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:03:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:07.363 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3673812c-f461-4e86-831f-b7a7821f4bda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:03:07 localhost nova_compute[281854]: 2025-12-02 10:03:07.364 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:07.364 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[4c52e8d4-16f2-46e2-8184-1f9d26aae144]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:07 localhost nova_compute[281854]: 2025-12-02 10:03:07.398 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:07 localhost nova_compute[281854]: 2025-12-02 10:03:07.432 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:07 localhost nova_compute[281854]: 2025-12-02 10:03:07.461 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:08 localhost podman[308316]: Dec 2 05:03:08 localhost podman[308316]: 2025-12-02 10:03:08.266040531 +0000 UTC m=+0.077841850 container create 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:08 localhost systemd[1]: Started libpod-conmon-1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2.scope. Dec 2 05:03:08 localhost podman[308316]: 2025-12-02 10:03:08.219998855 +0000 UTC m=+0.031800174 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:03:08 localhost systemd[1]: tmp-crun.pKIA95.mount: Deactivated successfully. Dec 2 05:03:08 localhost systemd[1]: Started libcrun container. 
Dec 2 05:03:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b24a22bfd2247520c320aa8b36a4cd59aff7c93df00851a3bdf42877c37d8eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:03:08 localhost podman[308316]: 2025-12-02 10:03:08.350642001 +0000 UTC m=+0.162443310 container init 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:03:08 localhost podman[308316]: 2025-12-02 10:03:08.359392406 +0000 UTC m=+0.171193715 container start 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:03:08 localhost dnsmasq[308334]: started, version 2.85 cachesize 150 Dec 2 05:03:08 localhost dnsmasq[308334]: DNS service limited to local subnets Dec 2 05:03:08 localhost dnsmasq[308334]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:03:08 localhost dnsmasq[308334]: warning: no upstream servers configured Dec 
2 05:03:08 localhost dnsmasq-dhcp[308334]: DHCP, static leases only on 19.80.0.0, lease time 1d Dec 2 05:03:08 localhost dnsmasq[308334]: read /var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/addn_hosts - 0 addresses Dec 2 05:03:08 localhost dnsmasq-dhcp[308334]: read /var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/host Dec 2 05:03:08 localhost dnsmasq-dhcp[308334]: read /var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/opts Dec 2 05:03:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:08.424 263406 INFO neutron.agent.dhcp.agent [None req-3cee9568-4648-4e84-b0aa-c395d14e194f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:06Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=40590dd1-9250-4409-a2d0-cd4f4774bfc8, ip_allocation=immediate, mac_address=fa:16:3e:51:01:78, name=tempest-subport-1284966936, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:04Z, description=, dns_domain=, id=3673812c-f461-4e86-831f-b7a7821f4bda, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-1030391115, port_security_enabled=True, project_id=cccbafb2e3c343b2aab51714734bddce, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=36642, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=325, status=ACTIVE, subnets=['f4f4111e-f6c8-4f21-b1f7-bb1c4d497d49'], tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, updated_at=2025-12-02T10:03:05Z, vlan_transparent=None, network_id=3673812c-f461-4e86-831f-b7a7821f4bda, port_security_enabled=True, project_id=cccbafb2e3c343b2aab51714734bddce, qos_network_policy_id=None, 
qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['5c93e274-85ac-42d3-b949-bdb62e6b8c39'], standard_attr_id=347, status=DOWN, tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, updated_at=2025-12-02T10:03:06Z on network 3673812c-f461-4e86-831f-b7a7821f4bda#033[00m Dec 2 05:03:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:08.542 263406 INFO neutron.agent.linux.ip_lib [None req-c3bd8ac9-86de-482c-b6ff-e04635e7f4ea - - - - - -] Device tapc4946b01-03 cannot be used as it has no MAC address#033[00m Dec 2 05:03:08 localhost nova_compute[281854]: 2025-12-02 10:03:08.562 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:08 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Dec 2 05:03:08 localhost kernel: device tapc4946b01-03 entered promiscuous mode Dec 2 05:03:08 localhost NetworkManager[5965]: [1764669788.5683] manager: (tapc4946b01-03): new Generic device (/org/freedesktop/NetworkManager/Devices/20) Dec 2 05:03:08 localhost ovn_controller[154505]: 2025-12-02T10:03:08Z|00090|binding|INFO|Claiming lport c4946b01-0395-4a62-9a39-4286d5803bca for this chassis. 
Dec 2 05:03:08 localhost ovn_controller[154505]: 2025-12-02T10:03:08Z|00091|binding|INFO|c4946b01-0395-4a62-9a39-4286d5803bca: Claiming unknown Dec 2 05:03:08 localhost nova_compute[281854]: 2025-12-02 10:03:08.570 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:08.575 263406 INFO neutron.agent.dhcp.agent [None req-35bec98c-b180-4377-82b8-1b4b7ca641aa - - - - - -] DHCP configuration for ports {'ba8757f7-1076-4bc0-8968-1084ffa48766'} is completed#033[00m Dec 2 05:03:08 localhost ovn_controller[154505]: 2025-12-02T10:03:08Z|00092|binding|INFO|Setting lport c4946b01-0395-4a62-9a39-4286d5803bca ovn-installed in OVS Dec 2 05:03:08 localhost ovn_controller[154505]: 2025-12-02T10:03:08Z|00093|binding|INFO|Setting lport c4946b01-0395-4a62-9a39-4286d5803bca up in Southbound Dec 2 05:03:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:08.583 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c4946b01-0395-4a62-9a39-4286d5803bca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:08 localhost nova_compute[281854]: 2025-12-02 10:03:08.584 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:08.584 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c4946b01-0395-4a62-9a39-4286d5803bca in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 bound to our chassis#033[00m Dec 2 05:03:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:08.586 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 13bbad22-ab61-4b1f-849e-c651aa8f3297 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:03:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:08.587 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[c67a5572-7a25-4d3f-9500-76a90bd85cb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:08 localhost nova_compute[281854]: 2025-12-02 10:03:08.605 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:08 localhost nova_compute[281854]: 2025-12-02 10:03:08.628 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:08 localhost dnsmasq[308334]: read 
/var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/addn_hosts - 1 addresses Dec 2 05:03:08 localhost podman[308363]: 2025-12-02 10:03:08.650675123 +0000 UTC m=+0.042769659 container kill 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:03:08 localhost dnsmasq-dhcp[308334]: read /var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/host Dec 2 05:03:08 localhost dnsmasq-dhcp[308334]: read /var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/opts Dec 2 05:03:08 localhost nova_compute[281854]: 2025-12-02 10:03:08.653 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:09 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:09.182 263406 INFO neutron.agent.dhcp.agent [None req-393758a9-a6e3-41ab-a514-43b6b1d6d252 - - - - - -] DHCP configuration for ports {'40590dd1-9250-4409-a2d0-cd4f4774bfc8'} is completed#033[00m Dec 2 05:03:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:03:09 localhost systemd[1]: tmp-crun.Zidxs0.mount: Deactivated successfully. 
Dec 2 05:03:09 localhost podman[308414]: 2025-12-02 10:03:09.391672397 +0000 UTC m=+0.089552814 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:03:09 localhost podman[308414]: 2025-12-02 10:03:09.402835396 +0000 UTC m=+0.100715813 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:09 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:03:09 localhost podman[308455]: Dec 2 05:03:09 localhost podman[308455]: 2025-12-02 10:03:09.539245527 +0000 UTC m=+0.065499328 container create 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:03:09 localhost systemd[1]: Started libpod-conmon-77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7.scope. Dec 2 05:03:09 localhost systemd[1]: Started libcrun container. Dec 2 05:03:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896dba9b1a38f0638159f863e9536c69068bcbb89b8facb5a357e5a5dc8cf960/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:03:09 localhost podman[308455]: 2025-12-02 10:03:09.601567379 +0000 UTC m=+0.127821180 container init 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:03:09 localhost podman[308455]: 2025-12-02 10:03:09.504564676 +0000 UTC m=+0.030818497 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:03:09 
localhost podman[308455]: 2025-12-02 10:03:09.609678407 +0000 UTC m=+0.135932208 container start 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:03:09 localhost dnsmasq[308473]: started, version 2.85 cachesize 150 Dec 2 05:03:09 localhost dnsmasq[308473]: DNS service limited to local subnets Dec 2 05:03:09 localhost dnsmasq[308473]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:03:09 localhost dnsmasq[308473]: warning: no upstream servers configured Dec 2 05:03:09 localhost dnsmasq-dhcp[308473]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:03:09 localhost dnsmasq[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/addn_hosts - 0 addresses Dec 2 05:03:09 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/host Dec 2 05:03:09 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/opts Dec 2 05:03:09 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:09.728 263406 INFO neutron.agent.dhcp.agent [None req-2097c0e3-0c94-445c-a2fd-9bfd4cf2c44a - - - - - -] DHCP configuration for ports {'202be55f-4a2f-4e8a-884e-d4a72a4d525d'} is completed#033[00m Dec 2 05:03:09 localhost nova_compute[281854]: 2025-12-02 10:03:09.781 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:09 localhost nova_compute[281854]: 2025-12-02 10:03:09.940 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.712 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.712 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.732 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.807 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.808 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.816 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.817 281858 INFO nova.compute.claims [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Claim successful on node np0005541913.localdomain#033[00m Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.821 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:12 localhost nova_compute[281854]: 2025-12-02 10:03:12.923 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:13 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2025890447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.379 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.386 281858 DEBUG nova.compute.provider_tree [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.401 281858 DEBUG nova.scheduler.client.report [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.423 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.424 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.466 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.467 281858 DEBUG nova.network.neutron [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.478 281858 INFO nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Ignoring supplied device name: /dev/vda. 
Libvirt can't honour user-supplied dev names#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.494 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.577 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.578 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.579 281858 INFO nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Creating image(s)#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.613 281858 DEBUG nova.storage.rbd_utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] rbd image 63092ab0-9432-4c74-933e-e9d5428e6162_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.646 281858 DEBUG nova.storage.rbd_utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] rbd image 63092ab0-9432-4c74-933e-e9d5428e6162_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.683 281858 DEBUG nova.storage.rbd_utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] rbd image 63092ab0-9432-4c74-933e-e9d5428e6162_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.687 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.688 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.701 281858 WARNING oslo_policy.policy [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] 
JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.701 281858 WARNING oslo_policy.policy [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.703 281858 DEBUG nova.policy [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60f523e6d03743daa3ff6f5bc7122d00', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cccbafb2e3c343b2aab51714734bddce', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m Dec 2 05:03:13 localhost nova_compute[281854]: 2025-12-02 10:03:13.738 281858 DEBUG nova.virt.libvirt.imagebackend [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 
cccbafb2e3c343b2aab51714734bddce - - default default] Image locations are: [{'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/d85e840d-fa56-497b-b5bd-b49584d3e97a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/d85e840d-fa56-497b-b5bd-b49584d3e97a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Dec 2 05:03:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:14.234 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005541913.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:03Z, description=, device_id=63092ab0-9432-4c74-933e-e9d5428e6162, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-liveautoblockmigrationv225test-server-861747463, extra_dhcp_opts=[], fixed_ips=[], id=31de197b-ef56-4d2a-9fa2-293715a60004, ip_allocation=immediate, mac_address=fa:16:3e:8f:bb:bd, name=tempest-parent-17247491, network_id=62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, port_security_enabled=True, project_id=cccbafb2e3c343b2aab51714734bddce, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['5c93e274-85ac-42d3-b949-bdb62e6b8c39'], standard_attr_id=324, status=DOWN, tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, trunk_details=sub_ports=[], trunk_id=5b1dd84a-69f3-4e17-8604-49965c03b89c, updated_at=2025-12-02T10:03:13Z on network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f#033[00m Dec 2 05:03:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:03:14 localhost podman[308564]: 2025-12-02 10:03:14.456758347 +0000 UTC m=+0.090018297 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 2 05:03:14 localhost podman[308564]: 2025-12-02 10:03:14.486911586 +0000 UTC 
m=+0.120171536 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:03:14 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.511 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:03:14 localhost dnsmasq[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/addn_hosts - 2 addresses
Dec 2 05:03:14 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/host
Dec 2 05:03:14 localhost podman[308573]: 2025-12-02 10:03:14.521384241 +0000 UTC m=+0.122960900 container kill 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 2 05:03:14 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/opts
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.589 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.590 281858 DEBUG nova.virt.images [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] d85e840d-fa56-497b-b5bd-b49584d3e97a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.591 281858 DEBUG nova.privsep.utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.591 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.674 281858 DEBUG nova.network.neutron [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Successfully updated port: 31de197b-ef56-4d2a-9fa2-293715a60004 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.694 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.694 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquired lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.695 281858 DEBUG nova.network.neutron [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 2 05:03:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:14.769 263406 INFO neutron.agent.dhcp.agent [None req-693c441f-0fd4-4c3e-9741-627c215fdfa7 - - - - - -] DHCP configuration for ports {'31de197b-ef56-4d2a-9fa2-293715a60004'} is completed
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.781 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.784 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.814 281858 DEBUG nova.network.neutron [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.817 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.819 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.856 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.857 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 1.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.881 281858 DEBUG nova.storage.rbd_utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] rbd image 63092ab0-9432-4c74-933e-e9d5428e6162_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 2 05:03:14 localhost nova_compute[281854]: 2025-12-02 10:03:14.884 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 63092ab0-9432-4c74-933e-e9d5428e6162_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:03:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:14.944 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:13Z, description=, device_id=c633bc2a-d8d8-4d52-951c-727821eef4f5, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e956d78f-d33b-49fb-a452-eaed9391e7d2, ip_allocation=immediate, mac_address=fa:16:3e:54:ce:16, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:06Z, description=, dns_domain=, id=13bbad22-ab61-4b1f-849e-c651aa8f3297, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1859087569-network, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25848, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=342, status=ACTIVE, subnets=['a62c0502-5155-4c20-aaad-4cc8bce976da'], tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:03:07Z, vlan_transparent=None, network_id=13bbad22-ab61-4b1f-849e-c651aa8f3297, port_security_enabled=False, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=411, status=DOWN, tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:03:14Z on network 13bbad22-ab61-4b1f-849e-c651aa8f3297
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.187 281858 DEBUG nova.network.neutron [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.265 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Releasing lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.266 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Instance network_info: |[{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 2 05:03:15 localhost dnsmasq[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/addn_hosts - 1 addresses
Dec 2 05:03:15 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/host
Dec 2 05:03:15 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/opts
Dec 2 05:03:15 localhost systemd[1]: tmp-crun.ZoLt0o.mount: Deactivated successfully.
Dec 2 05:03:15 localhost podman[308672]: 2025-12-02 10:03:15.356786889 +0000 UTC m=+0.070513383 container kill 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.356 281858 DEBUG nova.compute.manager [req-f549a86e-f608-422e-9010-2a1262a2e085 req-c8977d39-15cb-4501-941e-9b4a06d961ec dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-changed-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.357 281858 DEBUG nova.compute.manager [req-f549a86e-f608-422e-9010-2a1262a2e085 req-c8977d39-15cb-4501-941e-9b4a06d961ec dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Refreshing instance network info cache due to event network-changed-31de197b-ef56-4d2a-9fa2-293715a60004. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.357 281858 DEBUG oslo_concurrency.lockutils [req-f549a86e-f608-422e-9010-2a1262a2e085 req-c8977d39-15cb-4501-941e-9b4a06d961ec dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.358 281858 DEBUG oslo_concurrency.lockutils [req-f549a86e-f608-422e-9010-2a1262a2e085 req-c8977d39-15cb-4501-941e-9b4a06d961ec dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.358 281858 DEBUG nova.network.neutron [req-f549a86e-f608-422e-9010-2a1262a2e085 req-c8977d39-15cb-4501-941e-9b4a06d961ec dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Refreshing network info cache for port 31de197b-ef56-4d2a-9fa2-293715a60004 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.611 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 63092ab0-9432-4c74-933e-e9d5428e6162_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.713 281858 DEBUG nova.storage.rbd_utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] resizing rbd image 63092ab0-9432-4c74-933e-e9d5428e6162_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 2 05:03:15 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:15.850 263406 INFO neutron.agent.dhcp.agent [None req-a4b435d7-c0b1-455d-8992-3323068c46ee - - - - - -] DHCP configuration for ports {'e956d78f-d33b-49fb-a452-eaed9391e7d2'} is completed
Dec 2 05:03:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.885 281858 DEBUG nova.objects.instance [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lazy-loading 'migration_context' on Instance uuid 63092ab0-9432-4c74-933e-e9d5428e6162 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.908 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.909 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Ensure instance console log exists: /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.909 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.910 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.910 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.914 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Start _get_guest_xml network_info=[{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.920 281858 WARNING nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.923 281858 DEBUG nova.virt.libvirt.host [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Searching host: 'np0005541913.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.924 281858 DEBUG nova.virt.libvirt.host [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.926 281858 DEBUG nova.virt.libvirt.host [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Searching host: 'np0005541913.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.926 281858 DEBUG nova.virt.libvirt.host [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.927 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.927 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-12-02T10:01:55Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.928 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.929 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.929 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.929 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.930 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.930 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.931 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.931 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.931 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.932 281858 DEBUG nova.virt.hardware [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 2 05:03:15 localhost nova_compute[281854]: 2025-12-02 10:03:15.937 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:03:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 05:03:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 05:03:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 2 05:03:16 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3290176110' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.440 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:03:16 localhost podman[308787]: 2025-12-02 10:03:16.451332761 +0000 UTC m=+0.090765097 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 2 05:03:16 localhost podman[308787]: 2025-12-02 10:03:16.467099183 +0000 UTC m=+0.106531509 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 2 05:03:16 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.484 281858 DEBUG nova.storage.rbd_utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] rbd image 63092ab0-9432-4c74-933e-e9d5428e6162_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.489 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.508 281858 DEBUG nova.network.neutron [req-f549a86e-f608-422e-9010-2a1262a2e085 req-c8977d39-15cb-4501-941e-9b4a06d961ec dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updated VIF entry in instance network info cache for port 31de197b-ef56-4d2a-9fa2-293715a60004.
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.509 281858 DEBUG nova.network.neutron [req-f549a86e-f608-422e-9010-2a1262a2e085 req-c8977d39-15cb-4501-941e-9b4a06d961ec dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:03:16 localhost podman[308786]: 2025-12-02 10:03:16.422755053 +0000 UTC m=+0.066380081 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, 
name=openstack_network_exporter, health_status=healthy, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, 
url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container) Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.531 281858 DEBUG oslo_concurrency.lockutils [req-f549a86e-f608-422e-9010-2a1262a2e085 req-c8977d39-15cb-4501-941e-9b4a06d961ec dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:16 localhost podman[308786]: 2025-12-02 10:03:16.556012709 +0000 UTC m=+0.199637737 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7) Dec 2 05:03:16 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:03:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:03:16 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/4171051508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.915 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.918 281858 DEBUG nova.virt.libvirt.vif [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-861747463',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541913.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-861747463',id=7,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cccbafb2e3c343b2aab51714734bddce',ramdisk_id='',reservation_id='r-sf2jj0i0',resources=None,root_device
_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-5814605',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-5814605-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:03:13Z,user_data=None,user_id='60f523e6d03743daa3ff6f5bc7122d00',uuid=63092ab0-9432-4c74-933e-e9d5428e6162,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 2 05:03:16 
localhost nova_compute[281854]: 2025-12-02 10:03:16.919 281858 DEBUG nova.network.os_vif_util [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Converting VIF {"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.920 281858 DEBUG nova.network.os_vif_util [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:03:16 localhost nova_compute[281854]: 2025-12-02 10:03:16.923 281858 DEBUG nova.objects.instance [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lazy-loading 'pci_devices' on Instance uuid 63092ab0-9432-4c74-933e-e9d5428e6162 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.885 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] End _get_guest_xml xml= [libvirt guest domain XML elided: the XML markup was stripped during log extraction, leaving only text nodes interleaved with repeated "Dec 2 05:03:17 localhost nova_compute[281854]:" syslog prefixes; recoverable fields: name instance-00000007, uuid 63092ab0-9432-4c74-933e-e9d5428e6162, display name tempest-LiveAutoBlockMigrationV225Test-server-861747463, creation time 2025-12-02 10:03:15, memory 131072, flavor memory 128 / 1 vCPU, os type hvm, rng backend /dev/urandom, sysinfo RDO OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9, owner user tempest-LiveAutoBlockMigrationV225Test-5814605-project-member, project tempest-LiveAutoBlockMigrationV225Test-5814605] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.890 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Preparing to wait for external event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.890 281858 DEBUG oslo_concurrency.lockutils [None
req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.891 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.892 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.894 281858 DEBUG nova.virt.libvirt.vif [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-861747463',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541913.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-861747463',id=7,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cccbafb2e3c343b2aab51714734bddce',ramdisk_id='',reservation_id='r-sf2jj0i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-5814605',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-5814605-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:03:13Z,user_data=None,user_id='60f523e6d03743daa3ff6f5bc7122d00',uuid=63092ab0-9432-4c74-933e-e9d5428e6162,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state
='building') vif={"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.894 281858 DEBUG nova.network.os_vif_util [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Converting VIF {"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": 
{"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.896 281858 DEBUG nova.network.os_vif_util [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.897 281858 DEBUG os_vif [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.898 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.899 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.900 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.905 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.906 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31de197b-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.907 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31de197b-ef, col_values=(('external_ids', {'iface-id': '31de197b-ef56-4d2a-9fa2-293715a60004', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:bb:bd', 'vm-uuid': '63092ab0-9432-4c74-933e-e9d5428e6162'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.915 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 
10:03:17.918 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:17 localhost nova_compute[281854]: 2025-12-02 10:03:17.919 281858 INFO os_vif [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef')#033[00m Dec 2 05:03:18 localhost nova_compute[281854]: 2025-12-02 10:03:18.086 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:03:18 localhost nova_compute[281854]: 2025-12-02 10:03:18.087 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] No BDM found with device name sda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:03:18 localhost nova_compute[281854]: 2025-12-02 10:03:18.087 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] No VIF found with MAC fa:16:3e:8f:bb:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m Dec 2 05:03:18 localhost nova_compute[281854]: 2025-12-02 10:03:18.088 281858 INFO nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Using config drive#033[00m Dec 2 05:03:18 localhost nova_compute[281854]: 2025-12-02 10:03:18.131 281858 DEBUG nova.storage.rbd_utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] rbd image 63092ab0-9432-4c74-933e-e9d5428e6162_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:03:18 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:03:18 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.012 281858 INFO nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Creating config drive at /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/disk.config#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.019 281858 DEBUG 
oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpta2cs2cy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.149 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpta2cs2cy" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.201 281858 DEBUG nova.storage.rbd_utils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] rbd image 63092ab0-9432-4c74-933e-e9d5428e6162_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.206 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/disk.config 63092ab0-9432-4c74-933e-e9d5428e6162_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf 
execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.433 281858 DEBUG oslo_concurrency.processutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/disk.config 63092ab0-9432-4c74-933e-e9d5428e6162_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.434 281858 INFO nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deleting local config drive /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/disk.config because it was imported into RBD.#033[00m Dec 2 05:03:19 localhost systemd[1]: Started libvirt secret daemon. Dec 2 05:03:19 localhost kernel: device tap31de197b-ef entered promiscuous mode Dec 2 05:03:19 localhost NetworkManager[5965]: [1764669799.5414] manager: (tap31de197b-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/21) Dec 2 05:03:19 localhost ovn_controller[154505]: 2025-12-02T10:03:19Z|00094|binding|INFO|Claiming lport 31de197b-ef56-4d2a-9fa2-293715a60004 for this chassis. Dec 2 05:03:19 localhost ovn_controller[154505]: 2025-12-02T10:03:19Z|00095|binding|INFO|31de197b-ef56-4d2a-9fa2-293715a60004: Claiming fa:16:3e:8f:bb:bd 10.100.0.4 Dec 2 05:03:19 localhost ovn_controller[154505]: 2025-12-02T10:03:19Z|00096|binding|INFO|Claiming lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 for this chassis. 
Dec 2 05:03:19 localhost ovn_controller[154505]: 2025-12-02T10:03:19Z|00097|binding|INFO|40590dd1-9250-4409-a2d0-cd4f4774bfc8: Claiming fa:16:3e:51:01:78 19.80.0.123 Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.546 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:19 localhost systemd-udevd[309044]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:03:19 localhost ovn_controller[154505]: 2025-12-02T10:03:19Z|00098|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 ovn-installed in OVS Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.560 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.562 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:19 localhost NetworkManager[5965]: [1764669799.5701] device (tap31de197b-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 05:03:19 localhost NetworkManager[5965]: [1764669799.5708] device (tap31de197b-ef): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 2 05:03:19 localhost systemd-machined[84262]: New machine qemu-3-instance-00000007. Dec 2 05:03:19 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000007. 
Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.857 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.925 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:19 localhost nova_compute[281854]: 2025-12-02 10:03:19.925 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Started (Lifecycle Event)#033[00m Dec 2 05:03:20 localhost ovn_controller[154505]: 2025-12-02T10:03:20Z|00099|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 up in Southbound Dec 2 05:03:20 localhost ovn_controller[154505]: 2025-12-02T10:03:20Z|00100|binding|INFO|Setting lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 up in Southbound Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.500 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:bb:bd 10.100.0.4'], port_security=['fa:16:3e:8f:bb:bd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-17247491', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '63092ab0-9432-4c74-933e-e9d5428e6162', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-17247491', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 
'neutron:revision_number': '2', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c5273a4-e474-4c2c-a95a-a522e1a174bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=31de197b-ef56-4d2a-9fa2-293715a60004) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.503 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:01:78 19.80.0.123'], port_security=['fa:16:3e:51:01:78 19.80.0.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['31de197b-ef56-4d2a-9fa2-293715a60004'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1284966936', 'neutron:cidrs': '19.80.0.123/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1284966936', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '3', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=23ebc33b-05e4-4907-9bc1-7e563b7692f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=40590dd1-9250-4409-a2d0-cd4f4774bfc8) 
old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.505 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 31de197b-ef56-4d2a-9fa2-293715a60004 in datapath 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f bound to our chassis#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.508 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port b22990f2-0db4-407c-a5b6-65e7991152d1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.508 160221 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.519 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4cebe5-de32-4d37-bf99-7236371f5ec9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.520 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62df5f27-c1 in ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.523 160340 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62df5f27-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.523 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[806350cd-6559-4eae-8a97-f295bbf61dff]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.524 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ceef0be1-80cb-4d64-b23e-b72d9dfda56e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.536 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fc5f82-7ac1-4149-b45f-2bbfaad4e413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.551 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[c6acb99b-6351-4904-8f14-c12fda713ac3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.579 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[67019f57-b575-4bc7-a1c9-b5abf52447b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.586 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[42f90b64-6ffb-48c0-bb98-298adbaff0ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost NetworkManager[5965]: [1764669800.5875] manager: (tap62df5f27-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/22) Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.620 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa63576-fc78-401d-8323-c4e776e26501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.625 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[34b01d2b-6976-4352-bfe2-8980825fc81f]: (4, None) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap62df5f27-c1: link becomes ready Dec 2 05:03:20 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap62df5f27-c0: link becomes ready Dec 2 05:03:20 localhost NetworkManager[5965]: [1764669800.6540] device (tap62df5f27-c0): carrier: link connected Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.660 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6b2f52-874e-4887-99cb-8e729ab32ae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.677 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5b56ad1c-c3fc-4754-aa3a-22cc146f291a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62df5f27-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:73:df:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 
'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1196280, 'reachable_time': 26916, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 
'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309123, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.696 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[b83b9017-f167-4b73-a4bd-6f55145bc071]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:df9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1196280, 'tstamp': 1196280}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309124, 'error': None, 
'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.707 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.713 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.714 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Paused (Lifecycle Event)#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.715 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6960b323-9a0a-4040-88b3-4950e676f025]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62df5f27-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:73:df:9c'], 
['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1196280, 'reachable_time': 26916, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 
'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309125, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.745 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[747ed6c1-8065-4515-929a-3729ff486855]: (4, None) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.800 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[f3450662-2baf-4490-ab5c-3b87cd5a1191]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.802 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62df5f27-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.802 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.803 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62df5f27-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.803 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:20 localhost kernel: device tap62df5f27-c0 entered promiscuous mode Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.806 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.809 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] 
on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.810 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62df5f27-c0, col_values=(('external_ids', {'iface-id': 'ea045be8-e121-4ff5-bb82-2a757b7ce736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:20 localhost ovn_controller[154505]: 2025-12-02T10:03:20Z|00101|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0) Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.812 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.815 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.822 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.823 160221 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy' get_value_from_file 
/usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.824 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[bd666bd3-a5c9-4d25-9cdb-e6e09ed04a0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.825 160221 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: global Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: log /dev/log local0 debug Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: log-tag haproxy-metadata-proxy-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: user root Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: group root Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: maxconn 1024 Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: pidfile /var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: daemon Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: defaults Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: log global Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: mode http Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: option httplog Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: option dontlognull Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: option http-server-close Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: option forwardfor Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: retries 3 Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: timeout http-request 30s Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: timeout connect 30s Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: timeout client 32s Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 
timeout server 32s Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: timeout http-keep-alive 30s Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: listen listener Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: bind 169.254.169.254:80 Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: http-request add-header X-OVN-Network-ID 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:03:20 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:20.826 160221 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'env', 'PROCESS_TAG=haproxy-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:03:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:20 localhost nova_compute[281854]: 2025-12-02 10:03:20.870 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.078 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:21.346 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:13Z, description=, device_id=c633bc2a-d8d8-4d52-951c-727821eef4f5, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e956d78f-d33b-49fb-a452-eaed9391e7d2, ip_allocation=immediate, mac_address=fa:16:3e:54:ce:16, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:06Z, description=, dns_domain=, id=13bbad22-ab61-4b1f-849e-c651aa8f3297, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1859087569-network, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25848, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=342, status=ACTIVE, subnets=['a62c0502-5155-4c20-aaad-4cc8bce976da'], tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:03:07Z, vlan_transparent=None, network_id=13bbad22-ab61-4b1f-849e-c651aa8f3297, port_security_enabled=False, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=411, status=DOWN, tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:03:14Z on network 
13bbad22-ab61-4b1f-849e-c651aa8f3297#033[00m Dec 2 05:03:21 localhost podman[309158]: Dec 2 05:03:21 localhost podman[309158]: 2025-12-02 10:03:21.371991104 +0000 UTC m=+0.098970437 container create fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:03:21 localhost systemd[1]: Started libpod-conmon-fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03.scope. Dec 2 05:03:21 localhost podman[309158]: 2025-12-02 10:03:21.321864508 +0000 UTC m=+0.048844322 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:03:21 localhost systemd[1]: Started libcrun container. 
Dec 2 05:03:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e44ad48351df4bc5cf9273b4853724ba68f5d6925b7196bceece1b80907f57/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:03:21 localhost podman[309158]: 2025-12-02 10:03:21.474353671 +0000 UTC m=+0.201333014 container init fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:21 localhost podman[309158]: 2025-12-02 10:03:21.48478618 +0000 UTC m=+0.211765523 container start fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:03:21 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[309172]: [NOTICE] (309185) : New worker (309190) forked Dec 2 05:03:21 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[309172]: [NOTICE] (309185) : Loading success. 
Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.546 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 40590dd1-9250-4409-a2d0-cd4f4774bfc8 in datapath 3673812c-f461-4e86-831f-b7a7821f4bda unbound from our chassis#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.551 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 50e76764-b6f4-47d9-9fe0-99e7b5813c75 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.551 160221 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3673812c-f461-4e86-831f-b7a7821f4bda#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.559 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[408c3dd7-de6c-45de-9396-ba0244090ed2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.560 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3673812c-f1 in ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.562 160340 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3673812c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.562 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[17b97fa5-5515-4206-994c-5fc3cfa05ce6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.563 160340 DEBUG 
oslo.privsep.daemon [-] privsep: reply[72a13be5-0734-4b3a-bdc8-a7209988e446]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.574 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[033a0d74-28f9-4b94-aae9-130414101301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.587 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[9b566ed3-a7a9-4694-a311-3497afd34049]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.603 281858 DEBUG nova.compute.manager [req-d4c037f9-67dc-4a02-94d2-2aaebd6fcf63 req-10a287b8-19a5-41a5-af53-a531ec46647f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.603 281858 DEBUG oslo_concurrency.lockutils [req-d4c037f9-67dc-4a02-94d2-2aaebd6fcf63 req-10a287b8-19a5-41a5-af53-a531ec46647f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.606 281858 DEBUG oslo_concurrency.lockutils [req-d4c037f9-67dc-4a02-94d2-2aaebd6fcf63 req-10a287b8-19a5-41a5-af53-a531ec46647f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] 
Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.606 281858 DEBUG oslo_concurrency.lockutils [req-d4c037f9-67dc-4a02-94d2-2aaebd6fcf63 req-10a287b8-19a5-41a5-af53-a531ec46647f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.607 281858 DEBUG nova.compute.manager [req-d4c037f9-67dc-4a02-94d2-2aaebd6fcf63 req-10a287b8-19a5-41a5-af53-a531ec46647f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Processing event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.607 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.612 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Guest created on 
hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.615 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.615 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.617 281858 INFO nova.virt.libvirt.driver [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Instance spawned successfully.#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.617 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.617 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9a748b-9912-4f33-afe1-4d3dc23bf593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.623 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[dce52d7c-abc6-4b52-ac3e-9ef0ffd3a82a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost NetworkManager[5965]: [1764669801.6249] manager: (tap3673812c-f0): new Veth device 
(/org/freedesktop/NetworkManager/Devices/23) Dec 2 05:03:21 localhost systemd-udevd[309110]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:03:21 localhost podman[309203]: 2025-12-02 10:03:21.640099449 +0000 UTC m=+0.079835964 container kill 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:03:21 localhost dnsmasq[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/addn_hosts - 1 addresses Dec 2 05:03:21 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/host Dec 2 05:03:21 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/opts Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.648 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.654 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 
10:03:21.655 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.655 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.656 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.656 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.657 281858 DEBUG nova.virt.libvirt.driver [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 
63092ab0-9432-4c74-933e-e9d5428e6162] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.661 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.662 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[59a0beef-f567-4298-80b5-d5576f8e438b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.666 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[cc51efb9-65f2-4b62-8ccd-f5a36a16d235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost NetworkManager[5965]: [1764669801.6895] device (tap3673812c-f0): carrier: link connected Dec 2 05:03:21 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3673812c-f1: link becomes ready Dec 2 05:03:21 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3673812c-f0: link becomes ready Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.690 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.695 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[57a60cc0-7558-4482-b792-54a4b40de722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.714 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[fed0bc11-6d8f-4fdc-a3a4-6f997f8eb4fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3673812c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e1:13:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 
'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1196383, 'reachable_time': 44383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 
'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309229, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.719 281858 INFO nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Took 8.14 seconds to spawn the instance on the hypervisor.#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.720 281858 DEBUG nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.730 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[619de326-974d-4695-adcc-664020a969e0]: (4, 
({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:13c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1196383, 'tstamp': 1196383}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309231, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.743 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[984ac655-9156-4709-bc86-dd8060ca3e0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3673812c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e1:13:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1196383, 'reachable_time': 44383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309234, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.770 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[447801d1-36ce-4f7a-9fd1-981cbc8a7008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.788 281858 INFO nova.compute.manager [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Took 9.01 seconds to build instance.#033[00m Dec 2 05:03:21 localhost 
nova_compute[281854]: 2025-12-02 10:03:21.818 281858 DEBUG oslo_concurrency.lockutils [None req-3fb4799c-0c3c-4bc7-b583-6f253e51fc78 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 9.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.822 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[d63ed11a-57f3-4a53-923c-fe48a0d59675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.823 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3673812c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.823 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.824 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3673812c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.826 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:21 localhost kernel: device tap3673812c-f0 entered promiscuous mode Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 
2025-12-02 10:03:21.830 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3673812c-f0, col_values=(('external_ids', {'iface-id': 'ba8757f7-1076-4bc0-8968-1084ffa48766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:21 localhost ovn_controller[154505]: 2025-12-02T10:03:21Z|00102|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this chassis (sb_readonly=0) Dec 2 05:03:21 localhost nova_compute[281854]: 2025-12-02 10:03:21.842 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.843 160221 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.844 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[f3888fd0-2042-4227-a9b8-3b53bd19d5aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.845 160221 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: global Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: log /dev/log local0 debug Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: log-tag haproxy-metadata-proxy-3673812c-f461-4e86-831f-b7a7821f4bda Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: user root Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: group root Dec 2 05:03:21 localhost 
ovn_metadata_agent[160216]: maxconn 1024 Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: pidfile /var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: daemon Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: defaults Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: log global Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: mode http Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: option httplog Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: option dontlognull Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: option http-server-close Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: option forwardfor Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: retries 3 Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: timeout http-request 30s Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: timeout connect 30s Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: timeout client 32s Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: timeout server 32s Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: timeout http-keep-alive 30s Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: listen listener Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: bind 169.254.169.254:80 Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: http-request add-header X-OVN-Network-ID 3673812c-f461-4e86-831f-b7a7821f4bda Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:03:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:21.846 160221 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'env', 'PROCESS_TAG=haproxy-3673812c-f461-4e86-831f-b7a7821f4bda', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3673812c-f461-4e86-831f-b7a7821f4bda.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:03:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:21.874 263406 INFO neutron.agent.dhcp.agent [None req-4f4b2c56-0164-4e20-86ef-6834063a9380 - - - - - -] DHCP configuration for ports {'e956d78f-d33b-49fb-a452-eaed9391e7d2'} is completed#033[00m Dec 2 05:03:22 localhost podman[309269]: Dec 2 05:03:22 localhost podman[309269]: 2025-12-02 10:03:22.219804635 +0000 UTC m=+0.075112997 container create 5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:03:22 localhost systemd[1]: Started libpod-conmon-5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9.scope. Dec 2 05:03:22 localhost systemd[1]: Started libcrun container. 
Dec 2 05:03:22 localhost podman[309269]: 2025-12-02 10:03:22.183074889 +0000 UTC m=+0.038383281 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:03:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a803b833a5ff5c579638b33681dc94f55bac1f087c0b32e1bc859addffd561/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:03:22 localhost podman[309269]: 2025-12-02 10:03:22.299069842 +0000 UTC m=+0.154378214 container init 5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:03:22 localhost podman[309269]: 2025-12-02 10:03:22.315754609 +0000 UTC m=+0.171062981 container start 5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:03:22 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[309283]: [NOTICE] (309287) : New worker (309289) forked Dec 2 05:03:22 localhost 
neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[309283]: [NOTICE] (309287) : Loading success. Dec 2 05:03:22 localhost nova_compute[281854]: 2025-12-02 10:03:22.911 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:23 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:03:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:03:23 localhost podman[309298]: 2025-12-02 10:03:23.495829366 +0000 UTC m=+0.131160411 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:03:23 localhost podman[309298]: 2025-12-02 10:03:23.536406345 +0000 UTC m=+0.171737440 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:03:23 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:03:23 localhost nova_compute[281854]: 2025-12-02 10:03:23.748 281858 DEBUG nova.compute.manager [req-15163fd7-a138-43dd-9fd2-bf93b56ede75 req-b57662d3-f756-462f-a04f-bca3192e809a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:23 localhost nova_compute[281854]: 2025-12-02 10:03:23.749 281858 DEBUG oslo_concurrency.lockutils [req-15163fd7-a138-43dd-9fd2-bf93b56ede75 req-b57662d3-f756-462f-a04f-bca3192e809a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:23 localhost nova_compute[281854]: 2025-12-02 10:03:23.749 281858 DEBUG oslo_concurrency.lockutils [req-15163fd7-a138-43dd-9fd2-bf93b56ede75 req-b57662d3-f756-462f-a04f-bca3192e809a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:23 localhost nova_compute[281854]: 2025-12-02 10:03:23.750 281858 DEBUG oslo_concurrency.lockutils 
[req-15163fd7-a138-43dd-9fd2-bf93b56ede75 req-b57662d3-f756-462f-a04f-bca3192e809a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:23 localhost nova_compute[281854]: 2025-12-02 10:03:23.751 281858 DEBUG nova.compute.manager [req-15163fd7-a138-43dd-9fd2-bf93b56ede75 req-b57662d3-f756-462f-a04f-bca3192e809a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:23 localhost nova_compute[281854]: 2025-12-02 10:03:23.751 281858 WARNING nova.compute.manager [req-15163fd7-a138-43dd-9fd2-bf93b56ede75 req-b57662d3-f756-462f-a04f-bca3192e809a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state None.#033[00m Dec 2 05:03:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e97 e97: 6 total, 6 up, 6 in Dec 2 05:03:24 localhost nova_compute[281854]: 2025-12-02 10:03:24.076 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:24 localhost nova_compute[281854]: 2025-12-02 10:03:24.898 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e97 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e98 e98: 6 total, 6 up, 6 in Dec 2 05:03:27 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e99 e99: 6 total, 6 up, 6 in Dec 2 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:03:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:03:27 localhost podman[309317]: 2025-12-02 10:03:27.450497178 +0000 UTC m=+0.090560251 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:03:27 localhost podman[309317]: 2025-12-02 10:03:27.458468632 +0000 UTC m=+0.098531745 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:03:27 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:03:27 localhost podman[309318]: 2025-12-02 10:03:27.506146111 +0000 UTC m=+0.144024746 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:03:27 localhost podman[309318]: 2025-12-02 10:03:27.543022921 +0000 UTC m=+0.180901586 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:03:27 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:03:27 localhost nova_compute[281854]: 2025-12-02 10:03:27.913 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:29 localhost nova_compute[281854]: 2025-12-02 10:03:29.790 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Check if temp file /var/lib/nova/instances/tmp6m2ihysk exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m Dec 2 05:03:29 localhost nova_compute[281854]: 2025-12-02 10:03:29.791 281858 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] source check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m Dec 2 05:03:29 localhost nova_compute[281854]: 2025-12-02 10:03:29.902 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
05:03:30 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e100 e100: 6 total, 6 up, 6 in Dec 2 05:03:30 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:32.665 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:32 localhost nova_compute[281854]: 2025-12-02 10:03:32.666 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:32.668 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:03:32 localhost nova_compute[281854]: 2025-12-02 10:03:32.915 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:34 localhost openstack_network_exporter[242845]: ERROR 10:03:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:03:34 localhost openstack_network_exporter[242845]: ERROR 10:03:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd 
Dec 2 05:03:34 localhost openstack_network_exporter[242845]: ERROR 10:03:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:03:34 localhost openstack_network_exporter[242845]: ERROR 10:03:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:03:34 localhost openstack_network_exporter[242845]: Dec 2 05:03:34 localhost openstack_network_exporter[242845]: ERROR 10:03:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:03:34 localhost openstack_network_exporter[242845]: Dec 2 05:03:34 localhost nova_compute[281854]: 2025-12-02 10:03:34.953 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:35 localhost nova_compute[281854]: 2025-12-02 10:03:35.087 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:03:35 localhost nova_compute[281854]: 2025-12-02 10:03:35.087 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:03:35 localhost nova_compute[281854]: 2025-12-02 10:03:35.096 281858 INFO nova.compute.rpcapi [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Dec 2 05:03:35 localhost nova_compute[281854]: 2025-12-02 10:03:35.096 281858 DEBUG oslo_concurrency.lockutils 
[None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:35 localhost nova_compute[281854]: 2025-12-02 10:03:35.296 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:36 localhost podman[240799]: time="2025-12-02T10:03:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:03:36 localhost podman[240799]: @ - - [02/Dec/2025:10:03:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 162127 "" "Go-http-client/1.1" Dec 2 05:03:36 localhost podman[240799]: @ - - [02/Dec/2025:10:03:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 21144 "" "Go-http-client/1.1" Dec 2 05:03:36 localhost ovn_controller[154505]: 2025-12-02T10:03:36Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:bb:bd 10.100.0.4 Dec 2 05:03:36 localhost ovn_controller[154505]: 2025-12-02T10:03:36Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:bb:bd 10.100.0.4 Dec 2 05:03:37 localhost ovn_controller[154505]: 2025-12-02T10:03:37Z|00103|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:03:37 localhost ovn_controller[154505]: 2025-12-02T10:03:37Z|00104|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0) Dec 2 05:03:37 localhost ovn_controller[154505]: 2025-12-02T10:03:37Z|00105|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this 
chassis (sb_readonly=0) Dec 2 05:03:37 localhost nova_compute[281854]: 2025-12-02 10:03:37.788 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:37 localhost nova_compute[281854]: 2025-12-02 10:03:37.917 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:38.671 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:39 localhost nova_compute[281854]: 2025-12-02 10:03:39.955 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.128 281858 DEBUG nova.compute.manager [req-82284151-90b9-4cda-944e-fe6cb016570e req-f0e02f47-5e48-434e-bdec-c75cdac73624 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.129 281858 DEBUG oslo_concurrency.lockutils [req-82284151-90b9-4cda-944e-fe6cb016570e req-f0e02f47-5e48-434e-bdec-c75cdac73624 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.129 281858 DEBUG oslo_concurrency.lockutils [req-82284151-90b9-4cda-944e-fe6cb016570e req-f0e02f47-5e48-434e-bdec-c75cdac73624 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.130 281858 DEBUG oslo_concurrency.lockutils [req-82284151-90b9-4cda-944e-fe6cb016570e req-f0e02f47-5e48-434e-bdec-c75cdac73624 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.130 281858 DEBUG nova.compute.manager [req-82284151-90b9-4cda-944e-fe6cb016570e req-f0e02f47-5e48-434e-bdec-c75cdac73624 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.131 281858 DEBUG nova.compute.manager [req-82284151-90b9-4cda-944e-fe6cb016570e req-f0e02f47-5e48-434e-bdec-c75cdac73624 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event 
network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Dec 2 05:03:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:03:40 localhost podman[309365]: 2025-12-02 10:03:40.465454069 +0000 UTC m=+0.098555796 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 05:03:40 localhost podman[309365]: 2025-12-02 10:03:40.477354448 +0000 UTC m=+0.110456205 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:40 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:03:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:40.546 263406 INFO neutron.agent.linux.ip_lib [None req-d56a0314-fd8f-4bd2-ae84-88824e1313ec - - - - - -] Device tapae9b1151-59 cannot be used as it has no MAC address#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.571 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:40 localhost kernel: device tapae9b1151-59 entered promiscuous mode Dec 2 05:03:40 localhost NetworkManager[5965]: [1764669820.5774] manager: (tapae9b1151-59): new Generic device (/org/freedesktop/NetworkManager/Devices/24) Dec 2 05:03:40 localhost systemd-udevd[309395]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.579 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:40 localhost ovn_controller[154505]: 2025-12-02T10:03:40Z|00106|binding|INFO|Claiming lport ae9b1151-5912-406f-ae7b-9db37b471685 for this chassis. 
Dec 2 05:03:40 localhost ovn_controller[154505]: 2025-12-02T10:03:40Z|00107|binding|INFO|ae9b1151-5912-406f-ae7b-9db37b471685: Claiming unknown Dec 2 05:03:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:40.590 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-97ae066a-ecdb-4d1f-a021-787e342a02a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97ae066a-ecdb-4d1f-a021-787e342a02a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc1edab5ae5d43f08b967b5bf594f8b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5764aa57-a87d-4e3f-89b1-49a48ee4f883, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ae9b1151-5912-406f-ae7b-9db37b471685) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:40.591 160221 INFO neutron.agent.ovn.metadata.agent [-] Port ae9b1151-5912-406f-ae7b-9db37b471685 in datapath 97ae066a-ecdb-4d1f-a021-787e342a02a4 bound to our chassis#033[00m Dec 2 05:03:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:40.592 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
97ae066a-ecdb-4d1f-a021-787e342a02a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:03:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:40.593 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[af19e651-a41c-4bed-b631-d522ab2033a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.613 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:40 localhost ovn_controller[154505]: 2025-12-02T10:03:40Z|00108|binding|INFO|Setting lport ae9b1151-5912-406f-ae7b-9db37b471685 ovn-installed in OVS Dec 2 05:03:40 localhost ovn_controller[154505]: 2025-12-02T10:03:40Z|00109|binding|INFO|Setting lport ae9b1151-5912-406f-ae7b-9db37b471685 up in Southbound Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.618 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.647 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:40 localhost nova_compute[281854]: 2025-12-02 10:03:40.673 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:40 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:41 localhost nova_compute[281854]: 2025-12-02 10:03:41.103 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:41 localhost podman[309451]: Dec 2 05:03:41 localhost podman[309451]: 2025-12-02 10:03:41.554948535 +0000 UTC m=+0.069391673 container create 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:03:41 localhost systemd[1]: Started libpod-conmon-2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d.scope. Dec 2 05:03:41 localhost systemd[1]: Started libcrun container. Dec 2 05:03:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69386878a877368468586813b3dbb1937ee49b0390efbec5dd7e4f609902381/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:03:41 localhost podman[309451]: 2025-12-02 10:03:41.521309272 +0000 UTC m=+0.035752450 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:03:41 localhost podman[309451]: 2025-12-02 10:03:41.629590977 +0000 UTC m=+0.144034195 container init 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:03:41 localhost podman[309451]: 2025-12-02 10:03:41.639825282 +0000 UTC m=+0.154268490 container start 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:03:41 localhost dnsmasq[309469]: started, version 2.85 cachesize 150 Dec 2 05:03:41 localhost dnsmasq[309469]: DNS service limited to local subnets Dec 2 05:03:41 localhost dnsmasq[309469]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:03:41 localhost dnsmasq[309469]: warning: no upstream servers configured Dec 2 05:03:41 localhost dnsmasq-dhcp[309469]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:03:41 localhost dnsmasq[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/addn_hosts - 0 addresses Dec 2 05:03:41 localhost dnsmasq-dhcp[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/host Dec 2 05:03:41 localhost dnsmasq-dhcp[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/opts Dec 2 05:03:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:41.762 263406 INFO neutron.agent.dhcp.agent [None req-4c7df868-1e59-42ac-b35b-a3680cf97ca5 - - - - - -] DHCP configuration for ports {'a5732653-8ec3-490d-92c0-40205764cb6c'} is completed#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.072 281858 INFO 
nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Took 6.98 seconds for pre_live_migration on destination host np0005541914.localdomain.#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.072 281858 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.098 281858 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(899d75d5-bebe-4551-8a0f-b0309584472e),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.102 281858 DEBUG nova.objects.instance [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 
0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lazy-loading 'migration_context' on Instance uuid 63092ab0-9432-4c74-933e-e9d5428e6162 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.104 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.106 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.106 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.125 281858 DEBUG nova.virt.libvirt.vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-861747463',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541913.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-861747463',id=7,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T10:03:21Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='cccbafb2e3c343b2aab51714734bddce',ramdisk_id='',reservation_id='r-sf2jj0i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-5814605',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-5814605-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:03:21Z,user_data=None,user_id='60f523e
6d03743daa3ff6f5bc7122d00',uuid=63092ab0-9432-4c74-933e-e9d5428e6162,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.126 281858 DEBUG nova.network.os_vif_util [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Converting VIF {"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.127 281858 DEBUG nova.network.os_vif_util [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.128 281858 DEBUG nova.virt.libvirt.migration [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating guest XML with vif config: Dec 2 05:03:42 localhost nova_compute[281854]: Dec 2 05:03:42 localhost nova_compute[281854]: Dec 2 05:03:42 localhost nova_compute[281854]: Dec 2 05:03:42 localhost nova_compute[281854]: Dec 2 05:03:42 localhost nova_compute[281854]: Dec 2 05:03:42 localhost nova_compute[281854]: 
Dec 2 05:03:42 localhost nova_compute[281854]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.129 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.173 281858 DEBUG nova.compute.manager [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.174 281858 DEBUG oslo_concurrency.lockutils [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.174 281858 DEBUG oslo_concurrency.lockutils [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.175 281858 DEBUG oslo_concurrency.lockutils [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.175 281858 DEBUG nova.compute.manager [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.176 281858 WARNING nova.compute.manager [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state migrating.#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.176 281858 DEBUG nova.compute.manager [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-changed-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.177 281858 DEBUG nova.compute.manager [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Refreshing instance network info cache due to event network-changed-31de197b-ef56-4d2a-9fa2-293715a60004. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.177 281858 DEBUG oslo_concurrency.lockutils [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.178 281858 DEBUG oslo_concurrency.lockutils [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.178 281858 DEBUG nova.network.neutron [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Refreshing network info cache for port 31de197b-ef56-4d2a-9fa2-293715a60004 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m Dec 2 05:03:42 localhost systemd[1]: 
tmp-crun.COZjxR.mount: Deactivated successfully. Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.609 281858 DEBUG nova.virt.libvirt.migration [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.610 281858 INFO nova.virt.libvirt.migration [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.714 281858 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.879 281858 DEBUG nova.network.neutron [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updated VIF entry in instance network info cache for port 31de197b-ef56-4d2a-9fa2-293715a60004. 
_build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.880 281858 DEBUG nova.network.neutron [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005541914.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.902 281858 DEBUG oslo_concurrency.lockutils [req-6f4549b1-77b1-4e2e-936e-d8b6b9611067 req-eba3de69-6cf9-4e7e-b7d3-2c46efce0508 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock 
"refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:42 localhost nova_compute[281854]: 2025-12-02 10:03:42.920 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.218 281858 DEBUG nova.virt.libvirt.migration [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.219 281858 DEBUG nova.virt.libvirt.migration [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.724 281858 DEBUG nova.virt.libvirt.migration [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.725 281858 DEBUG nova.virt.libvirt.migration [None 
req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.786 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.787 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Paused (Lifecycle Event)#033[00m Dec 2 05:03:43 localhost kernel: device tap31de197b-ef left promiscuous mode Dec 2 05:03:43 localhost NetworkManager[5965]: [1764669823.9418] device (tap31de197b-ef): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 2 05:03:43 localhost ovn_controller[154505]: 2025-12-02T10:03:43Z|00110|binding|INFO|Releasing lport 31de197b-ef56-4d2a-9fa2-293715a60004 from this chassis (sb_readonly=0) Dec 2 05:03:43 localhost ovn_controller[154505]: 2025-12-02T10:03:43Z|00111|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 down in Southbound Dec 2 05:03:43 localhost ovn_controller[154505]: 2025-12-02T10:03:43Z|00112|binding|INFO|Releasing lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 from this chassis (sb_readonly=0) Dec 2 05:03:43 localhost ovn_controller[154505]: 2025-12-02T10:03:43Z|00113|binding|INFO|Setting lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 down in Southbound Dec 2 05:03:43 localhost ovn_controller[154505]: 2025-12-02T10:03:43Z|00114|binding|INFO|Removing iface tap31de197b-ef ovn-installed in OVS Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.955 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.959 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:43 localhost nova_compute[281854]: 2025-12-02 10:03:43.978 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:43 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully. Dec 2 05:03:43 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 15.208s CPU time. Dec 2 05:03:44 localhost systemd-machined[84262]: Machine qemu-3-instance-00000007 terminated. Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.013 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:44 localhost ovn_controller[154505]: 2025-12-02T10:03:44Z|00115|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:03:44 localhost ovn_controller[154505]: 2025-12-02T10:03:44Z|00116|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0) Dec 2 05:03:44 localhost ovn_controller[154505]: 2025-12-02T10:03:44Z|00117|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this chassis (sb_readonly=0) Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.017 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:bb:bd 10.100.0.4'], 
port_security=['fa:16:3e:8f:bb:bd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain,np0005541914.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': '515e0717-8baa-40e6-ac30-5fb148626504'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-17247491', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '63092ab0-9432-4c74-933e-e9d5428e6162', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-17247491', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c5273a4-e474-4c2c-a95a-a522e1a174bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=31de197b-ef56-4d2a-9fa2-293715a60004) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.019 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:01:78 19.80.0.123'], port_security=['fa:16:3e:51:01:78 19.80.0.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['31de197b-ef56-4d2a-9fa2-293715a60004'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 
'tempest-subport-1284966936', 'neutron:cidrs': '19.80.0.123/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1284966936', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '3', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=23ebc33b-05e4-4907-9bc1-7e563b7692f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=40590dd1-9250-4409-a2d0-cd4f4774bfc8) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.021 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 31de197b-ef56-4d2a-9fa2-293715a60004 in datapath 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f unbound from our chassis#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.023 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port b22990f2-0db4-407c-a5b6-65e7991152d1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.023 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.024 160340 DEBUG oslo.privsep.daemon 
[-] privsep: reply[ecad38a3-d08d-4dd4-877e-1d9f5c2cec85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.025 160221 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f namespace which is not needed anymore#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.059 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost journal[203664]: cannot parse process status data Dec 2 05:03:44 localhost journal[203664]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/63092ab0-9432-4c74-933e-e9d5428e6162_disk: No such file or directory Dec 2 05:03:44 localhost journal[203664]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/63092ab0-9432-4c74-933e-e9d5428e6162_disk: No such file or directory Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.103 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.109 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.131 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.133 281858 DEBUG nova.virt.libvirt.driver [None 
req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.134 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[309172]: [NOTICE] (309185) : haproxy version is 2.8.14-c23fe91 Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[309172]: [NOTICE] (309185) : path to executable is /usr/sbin/haproxy Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[309172]: [WARNING] (309185) : Exiting Master process... Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[309172]: [WARNING] (309185) : Exiting Master process... 
Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.230 281858 DEBUG nova.virt.libvirt.guest [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '63092ab0-9432-4c74-933e-e9d5428e6162' (instance-00000007) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.232 281858 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Migration operation has completed#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.232 281858 INFO nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] _post_live_migration() is started..#033[00m Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[309172]: [ALERT] (309185) : Current worker (309190) exited with code 143 (Terminated) Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[309172]: [WARNING] (309185) : All workers exited. Exiting... (0) Dec 2 05:03:44 localhost systemd[1]: libpod-fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03.scope: Deactivated successfully. 
Dec 2 05:03:44 localhost podman[309507]: 2025-12-02 10:03:44.240000298 +0000 UTC m=+0.081774095 container died fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:03:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03-userdata-shm.mount: Deactivated successfully. Dec 2 05:03:44 localhost podman[309507]: 2025-12-02 10:03:44.343482375 +0000 UTC m=+0.185256142 container cleanup fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:03:44 localhost podman[309521]: 2025-12-02 10:03:44.355576189 +0000 UTC m=+0.105442960 container cleanup fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:03:44 localhost systemd[1]: libpod-conmon-fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03.scope: Deactivated successfully. Dec 2 05:03:44 localhost podman[309538]: 2025-12-02 10:03:44.438987407 +0000 UTC m=+0.076277807 container remove fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.443 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[1b76e555-aa6f-4314-bc20-0d09aff10df4]: (4, ('Tue Dec 2 10:03:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f (fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03)\nfd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03\nTue Dec 2 10:03:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f (fd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03)\nfd5e78e90e94a2686f4ae163d9f6dd4308c2dd49de37db63e43a9097c2945e03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.445 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab0960b-e91f-48cd-9c4c-a9041e8030c1]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.446 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62df5f27-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.450 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost kernel: device tap62df5f27-c0 left promiscuous mode Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.460 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.463 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc5e43e-6305-4199-85b6-70517aa35c0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.481 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6274bafd-4027-4203-aa7a-b1ae288b84fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.482 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[86a316e6-d5a4-465c-b358-d07116563b34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.501 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[aac0fc88-39c8-4502-b8ff-9b244f28504d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], 
['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 
0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1196272, 'reachable_time': 25541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 
'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309558, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.504 160371 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.504 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[ee81ca3b-eb9c-465e-96d5-d4f02f31194e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.505 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 40590dd1-9250-4409-a2d0-cd4f4774bfc8 in datapath 3673812c-f461-4e86-831f-b7a7821f4bda unbound from our chassis#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.511 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 50e76764-b6f4-47d9-9fe0-99e7b5813c75 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.512 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3673812c-f461-4e86-831f-b7a7821f4bda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.513 160340 DEBUG oslo.privsep.daemon [-] 
privsep: reply[2e8dd957-b8b6-46e9-a62a-91c95eb94abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.513 160221 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda namespace which is not needed anymore#033[00m Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[309283]: [NOTICE] (309287) : haproxy version is 2.8.14-c23fe91 Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[309283]: [NOTICE] (309287) : path to executable is /usr/sbin/haproxy Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[309283]: [WARNING] (309287) : Exiting Master process... Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[309283]: [WARNING] (309287) : Exiting Master process... Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[309283]: [ALERT] (309287) : Current worker (309289) exited with code 143 (Terminated) Dec 2 05:03:44 localhost neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[309283]: [WARNING] (309287) : All workers exited. Exiting... (0) Dec 2 05:03:44 localhost systemd[1]: libpod-5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9.scope: Deactivated successfully. 
Dec 2 05:03:44 localhost podman[309576]: 2025-12-02 10:03:44.70627115 +0000 UTC m=+0.077480640 container died 5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.720 281858 DEBUG nova.compute.manager [req-da1fcdb2-c18f-4ab1-a3bf-a1162073ecf2 req-ab0eabc9-9457-445f-97d2-62861d27e746 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.721 281858 DEBUG oslo_concurrency.lockutils [req-da1fcdb2-c18f-4ab1-a3bf-a1162073ecf2 req-ab0eabc9-9457-445f-97d2-62861d27e746 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.722 281858 DEBUG oslo_concurrency.lockutils [req-da1fcdb2-c18f-4ab1-a3bf-a1162073ecf2 req-ab0eabc9-9457-445f-97d2-62861d27e746 
dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.722 281858 DEBUG oslo_concurrency.lockutils [req-da1fcdb2-c18f-4ab1-a3bf-a1162073ecf2 req-ab0eabc9-9457-445f-97d2-62861d27e746 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.722 281858 DEBUG nova.compute.manager [req-da1fcdb2-c18f-4ab1-a3bf-a1162073ecf2 req-ab0eabc9-9457-445f-97d2-62861d27e746 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.723 281858 DEBUG nova.compute.manager [req-da1fcdb2-c18f-4ab1-a3bf-a1162073ecf2 req-ab0eabc9-9457-445f-97d2-62861d27e746 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with task_state migrating. 
_process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m Dec 2 05:03:44 localhost podman[309576]: 2025-12-02 10:03:44.738833394 +0000 UTC m=+0.110042814 container cleanup 5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:03:44 localhost podman[309610]: 2025-12-02 10:03:44.829320052 +0000 UTC m=+0.067842892 container remove 5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.833 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[2415a355-e0ce-459d-8e0d-0f4ffb4beaa4]: (4, ('Tue Dec 2 10:03:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda (5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9)\n5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9\nTue Dec 2 10:03:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda 
(5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9)\n5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.835 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6a9070-27cd-44e3-a152-cd22912d64ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.837 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3673812c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.863 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:44 localhost systemd[1]: libpod-conmon-5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9.scope: Deactivated successfully. 
Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.882 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost kernel: device tap3673812c-f0 left promiscuous mode Dec 2 05:03:44 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e101 e101: 6 total, 6 up, 6 in Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.892 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.896 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcaea61-7b56-4a01-b59f-ca2c5cc9655d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.919 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[cacb33c3-9535-4be7-9564-8ee955daa90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.920 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8bf485-9ac6-4eab-aa10-dc2b3a29da80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost podman[309592]: 2025-12-02 10:03:44.809646905 +0000 UTC m=+0.083161863 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.937 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[e488d0e0-c916-4881-b96a-d55e1af92e75]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], 
['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 
2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1196376, 'reachable_time': 23116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309640, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m 
Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.939 160371 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 05:03:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:03:44.939 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[faf5c316-2475-49ed-96d2-f908a797a235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.953 281858 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Activated binding for port 31de197b-ef56-4d2a-9fa2-293715a60004 and host np0005541914.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.954 281858 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.955 281858 DEBUG nova.virt.libvirt.vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-861747463',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541913.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-861747463',id=7,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T10:03:21Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='cccbafb2e3c343b2aab51714734bddce',ramdisk_id='',reservation_id='r-sf2jj0i0',resources=None,root_device_n
ame='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-5814605',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-5814605-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:03:23Z,user_data=None,user_id='60f523e6d03743daa3ff6f5bc7122d00',uuid=63092ab0-9432-4c74-933e-e9d5428e6162,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": 
{}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.955 281858 DEBUG nova.network.os_vif_util [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Converting VIF {"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.956 281858 DEBUG nova.network.os_vif_util [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Converted object 
VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.956 281858 DEBUG os_vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.958 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost podman[309592]: 2025-12-02 10:03:44.959146236 +0000 UTC m=+0.232661304 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.960 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31de197b-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.961 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.962 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.966 281858 INFO os_vif [None 
req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef')#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.966 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.966 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.967 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.967 281858 DEBUG nova.compute.manager [None 
req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.967 281858 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deleting instance files /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162_del#033[00m Dec 2 05:03:44 localhost nova_compute[281854]: 2025-12-02 10:03:44.968 281858 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deletion of /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162_del complete#033[00m Dec 2 05:03:44 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:03:45 localhost systemd[1]: var-lib-containers-storage-overlay-73a803b833a5ff5c579638b33681dc94f55bac1f087c0b32e1bc859addffd561-merged.mount: Deactivated successfully. Dec 2 05:03:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5886c88cd844d5d769261d8f712fc0c863050130a274673349a42b1db3f379f9-userdata-shm.mount: Deactivated successfully. Dec 2 05:03:45 localhost systemd[1]: run-netns-ovnmeta\x2d3673812c\x2df461\x2d4e86\x2d831f\x2db7a7821f4bda.mount: Deactivated successfully. Dec 2 05:03:45 localhost systemd[1]: var-lib-containers-storage-overlay-88e44ad48351df4bc5cf9273b4853724ba68f5d6925b7196bceece1b80907f57-merged.mount: Deactivated successfully. 
Dec 2 05:03:45 localhost systemd[1]: run-netns-ovnmeta\x2d62df5f27\x2dc8d9\x2d4d79\x2d9ad6\x2d2f32e63bf47f.mount: Deactivated successfully. Dec 2 05:03:45 localhost nova_compute[281854]: 2025-12-02 10:03:45.612 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:45 localhost nova_compute[281854]: 2025-12-02 10:03:45.949 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:03:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:03:47 localhost podman[309641]: 2025-12-02 10:03:47.447792627 +0000 UTC m=+0.089614698 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.) Dec 2 05:03:47 localhost systemd[1]: tmp-crun.8nzOeJ.mount: Deactivated successfully. 
Dec 2 05:03:47 localhost podman[309642]: 2025-12-02 10:03:47.50842044 +0000 UTC m=+0.147371994 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:03:47 localhost podman[309641]: 2025-12-02 10:03:47.517728298 +0000 UTC m=+0.159550419 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat 
Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 05:03:47 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:03:47 localhost podman[309642]: 2025-12-02 10:03:47.569958936 +0000 UTC m=+0.208910460 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:03:47 localhost systemd[1]: 
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:03:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:48.437 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:47Z, description=, device_id=88a5a4f4-0c8e-40f7-81a0-9e11da229be3, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=899a6997-58ef-4285-95b3-238237010220, ip_allocation=immediate, mac_address=fa:16:3e:3f:26:a6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:36Z, description=, dns_domain=, id=97ae066a-ecdb-4d1f-a021-787e342a02a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-624382811-network, port_security_enabled=True, project_id=dc1edab5ae5d43f08b967b5bf594f8b5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=52168, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=452, status=ACTIVE, subnets=['1815dd23-acbf-4703-8ca8-599f5aab162a'], tags=[], tenant_id=dc1edab5ae5d43f08b967b5bf594f8b5, updated_at=2025-12-02T10:03:39Z, vlan_transparent=None, network_id=97ae066a-ecdb-4d1f-a021-787e342a02a4, port_security_enabled=False, project_id=dc1edab5ae5d43f08b967b5bf594f8b5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=490, status=DOWN, tags=[], tenant_id=dc1edab5ae5d43f08b967b5bf594f8b5, updated_at=2025-12-02T10:03:47Z on network 97ae066a-ecdb-4d1f-a021-787e342a02a4#033[00m Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.507 281858 DEBUG oslo_concurrency.lockutils 
[None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.507 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.507 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.530 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.531 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - 
default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.531 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.531 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.532 281858 DEBUG oslo_concurrency.processutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:48 localhost dnsmasq[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/addn_hosts - 1 addresses Dec 2 05:03:48 localhost podman[309703]: 2025-12-02 10:03:48.698422176 +0000 UTC m=+0.061828455 container kill 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:03:48 localhost dnsmasq-dhcp[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/host Dec 2 05:03:48 localhost dnsmasq-dhcp[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/opts Dec 2 05:03:48 localhost ovn_controller[154505]: 2025-12-02T10:03:48Z|00118|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:03:48 localhost nova_compute[281854]: 2025-12-02 10:03:48.873 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:48 localhost snmpd[69635]: empty variable list in _query Dec 2 05:03:48 localhost snmpd[69635]: empty variable list in _query Dec 2 05:03:48 localhost snmpd[69635]: empty variable list in _query Dec 2 05:03:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:48.977 263406 INFO neutron.agent.dhcp.agent [None req-16ee785d-a3c7-43fa-afef-65c35f7f671e - - - - - -] DHCP configuration for ports {'899a6997-58ef-4285-95b3-238237010220'} is completed#033[00m Dec 2 05:03:49 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:49 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1033273436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.095 281858 DEBUG oslo_concurrency.processutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.182 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.183 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.387 281858 WARNING nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.389 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11321MB free_disk=41.563968658447266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": 
"type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.390 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.390 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.444 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Migration for instance 63092ab0-9432-4c74-933e-e9d5428e6162 refers to another host's instance! 
_pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.477 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.505 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.506 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Migration 899d75d5-bebe-4551-8a0f-b0309584472e is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.506 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.507 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.563 281858 DEBUG oslo_concurrency.processutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:49 localhost nova_compute[281854]: 2025-12-02 10:03:49.961 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:49 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:49 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3031907273' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:50 localhost nova_compute[281854]: 2025-12-02 10:03:50.004 281858 DEBUG oslo_concurrency.processutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:50 localhost nova_compute[281854]: 2025-12-02 10:03:50.011 281858 DEBUG nova.compute.provider_tree [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:03:50 localhost nova_compute[281854]: 2025-12-02 10:03:50.035 281858 DEBUG nova.scheduler.client.report [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:03:50 localhost nova_compute[281854]: 2025-12-02 10:03:50.066 281858 DEBUG nova.compute.resource_tracker [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default 
default] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:03:50 localhost nova_compute[281854]: 2025-12-02 10:03:50.067 281858 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:50 localhost nova_compute[281854]: 2025-12-02 10:03:50.075 281858 INFO nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Migrating instance to np0005541914.localdomain finished successfully.#033[00m Dec 2 05:03:50 localhost nova_compute[281854]: 2025-12-02 10:03:50.174 281858 INFO nova.scheduler.client.report [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Deleted allocation for migration 899d75d5-bebe-4551-8a0f-b0309584472e#033[00m Dec 2 05:03:50 localhost nova_compute[281854]: 2025-12-02 10:03:50.174 281858 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m Dec 2 05:03:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e102 e102: 6 total, 6 up, 6 in Dec 2 05:03:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 
348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:51 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:51.199 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:47Z, description=, device_id=88a5a4f4-0c8e-40f7-81a0-9e11da229be3, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=899a6997-58ef-4285-95b3-238237010220, ip_allocation=immediate, mac_address=fa:16:3e:3f:26:a6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:36Z, description=, dns_domain=, id=97ae066a-ecdb-4d1f-a021-787e342a02a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-624382811-network, port_security_enabled=True, project_id=dc1edab5ae5d43f08b967b5bf594f8b5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=52168, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=452, status=ACTIVE, subnets=['1815dd23-acbf-4703-8ca8-599f5aab162a'], tags=[], tenant_id=dc1edab5ae5d43f08b967b5bf594f8b5, updated_at=2025-12-02T10:03:39Z, vlan_transparent=None, network_id=97ae066a-ecdb-4d1f-a021-787e342a02a4, port_security_enabled=False, project_id=dc1edab5ae5d43f08b967b5bf594f8b5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=490, status=DOWN, tags=[], tenant_id=dc1edab5ae5d43f08b967b5bf594f8b5, updated_at=2025-12-02T10:03:47Z on network 97ae066a-ecdb-4d1f-a021-787e342a02a4#033[00m Dec 2 05:03:51 localhost dnsmasq[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/addn_hosts - 1 addresses Dec 2 05:03:51 localhost 
dnsmasq-dhcp[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/host Dec 2 05:03:51 localhost dnsmasq-dhcp[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/opts Dec 2 05:03:51 localhost podman[309783]: 2025-12-02 10:03:51.449818354 +0000 UTC m=+0.072656875 container kill 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:51 localhost systemd[1]: tmp-crun.hH2vOD.mount: Deactivated successfully. Dec 2 05:03:51 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:51.653 263406 INFO neutron.agent.dhcp.agent [None req-13cd270d-62df-4f51-a2c9-1aec4c2acaaf - - - - - -] DHCP configuration for ports {'899a6997-58ef-4285-95b3-238237010220'} is completed#033[00m Dec 2 05:03:52 localhost nova_compute[281854]: 2025-12-02 10:03:52.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:52 localhost nova_compute[281854]: 2025-12-02 10:03:52.829 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:03:52 localhost nova_compute[281854]: 2025-12-02 10:03:52.830 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - 
- - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:03:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:53.032 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:03Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=31de197b-ef56-4d2a-9fa2-293715a60004, ip_allocation=immediate, mac_address=fa:16:3e:8f:bb:bd, name=tempest-parent-17247491, network_id=62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, port_security_enabled=True, project_id=cccbafb2e3c343b2aab51714734bddce, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=14, security_groups=['5c93e274-85ac-42d3-b949-bdb62e6b8c39'], standard_attr_id=324, status=DOWN, tags=[], tenant_id=cccbafb2e3c343b2aab51714734bddce, trunk_details=sub_ports=[], trunk_id=5b1dd84a-69f3-4e17-8604-49965c03b89c, updated_at=2025-12-02T10:03:51Z on network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f#033[00m Dec 2 05:03:53 localhost dnsmasq[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/addn_hosts - 2 addresses Dec 2 05:03:53 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/host Dec 2 05:03:53 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/opts Dec 2 05:03:53 localhost podman[309821]: 2025-12-02 10:03:53.282696968 +0000 UTC m=+0.066556352 container kill 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:53 localhost systemd[1]: tmp-crun.491ZLY.mount: Deactivated successfully. Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.345 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.346 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.346 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.347 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:03:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:03:53.480 263406 INFO neutron.agent.dhcp.agent [None req-bd4c171a-2e18-4988-bba5-0bc844afee44 - - - - - -] DHCP configuration for ports {'31de197b-ef56-4d2a-9fa2-293715a60004'} is completed#033[00m Dec 2 05:03:53 
localhost nova_compute[281854]: 2025-12-02 10:03:53.908 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.927 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.927 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network 
info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.928 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.929 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.930 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.952 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.952 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.953 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] 
Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.953 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:03:53 localhost nova_compute[281854]: 2025-12-02 10:03:53.954 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:03:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:54 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2587233418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.424 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:54 localhost podman[309864]: 2025-12-02 10:03:54.461679429 +0000 UTC m=+0.098629920 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:54 localhost podman[309864]: 2025-12-02 10:03:54.476201697 +0000 UTC m=+0.113152238 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:54 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.500 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.501 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.743 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.745 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11311MB free_disk=41.7004280090332GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.746 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.747 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.830 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.831 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.831 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.882 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:03:54 localhost nova_compute[281854]: 2025-12-02 10:03:54.965 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:03:55 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2649201936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:03:55 localhost nova_compute[281854]: 2025-12-02 10:03:55.397 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:03:55 localhost nova_compute[281854]: 2025-12-02 10:03:55.403 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:03:55 localhost nova_compute[281854]: 2025-12-02 10:03:55.498 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:03:55 localhost neutron_sriov_agent[256494]: 2025-12-02 10:03:55.498 2 INFO neutron.agent.securitygroups_rpc [None req-e52c4e8f-c1be-4de8-b00c-43719449fd5b 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']#033[00m Dec 2 05:03:55 localhost nova_compute[281854]: 2025-12-02 10:03:55.501 281858 DEBUG nova.compute.resource_tracker [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:03:55 localhost nova_compute[281854]: 2025-12-02 10:03:55.501 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:03:55 localhost dnsmasq[308334]: read /var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/addn_hosts - 0 addresses Dec 2 05:03:55 localhost dnsmasq-dhcp[308334]: read /var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/host Dec 2 05:03:55 localhost dnsmasq-dhcp[308334]: read /var/lib/neutron/dhcp/3673812c-f461-4e86-831f-b7a7821f4bda/opts Dec 2 05:03:55 localhost podman[309925]: 2025-12-02 10:03:55.802035047 +0000 UTC m=+0.069359766 container kill 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:03:57 localhost nova_compute[281854]: 2025-12-02 10:03:57.401 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - 
-] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:57 localhost nova_compute[281854]: 2025-12-02 10:03:57.490 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:57 localhost nova_compute[281854]: 2025-12-02 10:03:57.912 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:03:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:03:58 localhost podman[309947]: 2025-12-02 10:03:58.446853875 +0000 UTC m=+0.087089882 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:03:58 localhost podman[309947]: 2025-12-02 10:03:58.486060504 +0000 UTC m=+0.126296561 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:03:58 localhost systemd[1]: tmp-crun.uyxf4D.mount: Deactivated successfully. Dec 2 05:03:58 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:03:58 localhost podman[309948]: 2025-12-02 10:03:58.517011982 +0000 UTC m=+0.152281836 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 05:03:58 localhost podman[309948]: 2025-12-02 10:03:58.598134412 +0000 UTC m=+0.233404246 container exec_died 
cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:03:58 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:03:58 localhost nova_compute[281854]: 2025-12-02 10:03:58.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:59 localhost nova_compute[281854]: 2025-12-02 10:03:59.120 281858 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:03:59 localhost nova_compute[281854]: 2025-12-02 10:03:59.121 281858 INFO nova.compute.manager [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Stopped (Lifecycle Event)#033[00m Dec 2 05:03:59 localhost nova_compute[281854]: 2025-12-02 10:03:59.235 281858 DEBUG nova.compute.manager [None req-c26b12f1-728d-4822-85f2-643a4a363367 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:03:59 localhost dnsmasq[308334]: exiting on receipt of SIGTERM Dec 2 05:03:59 localhost podman[310015]: 2025-12-02 10:03:59.658993532 +0000 UTC m=+0.055577198 container kill 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:03:59 localhost systemd[1]: libpod-1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2.scope: Deactivated successfully. 
Dec 2 05:03:59 localhost ovn_controller[154505]: 2025-12-02T10:03:59Z|00119|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:03:59 localhost podman[310030]: 2025-12-02 10:03:59.735461609 +0000 UTC m=+0.065489574 container died 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:03:59 localhost nova_compute[281854]: 2025-12-02 10:03:59.756 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:03:59 localhost podman[310030]: 2025-12-02 10:03:59.779859066 +0000 UTC m=+0.109887001 container cleanup 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:03:59 localhost systemd[1]: libpod-conmon-1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2.scope: Deactivated successfully. Dec 2 05:03:59 localhost nova_compute[281854]: 2025-12-02 10:03:59.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:03:59 localhost podman[310037]: 2025-12-02 10:03:59.856715322 +0000 UTC m=+0.175040674 container remove 1b22bb38a40c045fc2a47645e1a1a7cec84234f360093d901ddb7a461f8e88d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3673812c-f461-4e86-831f-b7a7821f4bda, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 2 05:03:59 localhost nova_compute[281854]: 2025-12-02 10:03:59.867 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:59 localhost ovn_controller[154505]: 2025-12-02T10:03:59Z|00120|binding|INFO|Releasing lport 07dfafb4-0984-469d-a49c-9faf3746b302 from this chassis (sb_readonly=0) Dec 2 05:03:59 localhost ovn_controller[154505]: 2025-12-02T10:03:59Z|00121|binding|INFO|Setting lport 07dfafb4-0984-469d-a49c-9faf3746b302 down in Southbound Dec 2 05:03:59 localhost kernel: device tap07dfafb4-09 left promiscuous mode Dec 2 05:03:59 localhost nova_compute[281854]: 2025-12-02 10:03:59.884 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:03:59 localhost nova_compute[281854]: 2025-12-02 10:03:59.971 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:00.112 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.3/24', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 
'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23ebc33b-05e4-4907-9bc1-7e563b7692f1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=07dfafb4-0984-469d-a49c-9faf3746b302) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:00.114 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 07dfafb4-0984-469d-a49c-9faf3746b302 in datapath 3673812c-f461-4e86-831f-b7a7821f4bda unbound from our chassis#033[00m Dec 2 05:04:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:00.118 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3673812c-f461-4e86-831f-b7a7821f4bda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:00.119 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[7895fb35-6480-4d70-b80d-13d1ccb84199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e103 e103: 6 total, 6 up, 6 in Dec 2 05:04:00 localhost systemd[1]: var-lib-containers-storage-overlay-7b24a22bfd2247520c320aa8b36a4cd59aff7c93df00851a3bdf42877c37d8eb-merged.mount: Deactivated successfully. 
Dec 2 05:04:00 localhost nova_compute[281854]: 2025-12-02 10:04:00.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:01 localhost systemd[1]: run-netns-qdhcp\x2d3673812c\x2df461\x2d4e86\x2d831f\x2db7a7821f4bda.mount: Deactivated successfully. Dec 2 05:04:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:01.443 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:04:01 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:01.899 2 INFO neutron.agent.securitygroups_rpc [None req-d7c6b922-a31a-45e0-b3f4-c5bd99f50015 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']#033[00m Dec 2 05:04:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:01.939 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:01Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=54433c73-7e5c-481c-b64c-19e9cfd6e56f, ip_allocation=immediate, mac_address=fa:16:3e:bb:b6:1c, name=tempest-parent-146896978, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:06Z, description=, dns_domain=, id=13bbad22-ab61-4b1f-849e-c651aa8f3297, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1859087569-network, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25848, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=342, status=ACTIVE, subnets=['a62c0502-5155-4c20-aaad-4cc8bce976da'], tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:03:07Z, vlan_transparent=None, network_id=13bbad22-ab61-4b1f-849e-c651aa8f3297, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['576d6513-029b-4880-bb0b-58094b586b90'], standard_attr_id=537, status=DOWN, tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:04:01Z on network 13bbad22-ab61-4b1f-849e-c651aa8f3297#033[00m Dec 2 05:04:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e104 e104: 6 total, 6 up, 6 in Dec 2 05:04:02 localhost systemd[1]: tmp-crun.TcdiUZ.mount: Deactivated successfully. 
Dec 2 05:04:02 localhost podman[310077]: 2025-12-02 10:04:02.297678664 +0000 UTC m=+0.083111764 container kill 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:02 localhost dnsmasq[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/addn_hosts - 2 addresses Dec 2 05:04:02 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/host Dec 2 05:04:02 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/opts Dec 2 05:04:02 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:02.518 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:04:02 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:02.592 263406 INFO neutron.agent.dhcp.agent [None req-27ffb734-3d7a-4ec4-acb0-0feda9702f62 - - - - - -] DHCP configuration for ports {'54433c73-7e5c-481c-b64c-19e9cfd6e56f'} is completed#033[00m Dec 2 05:04:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:03.048 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:03.049 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:03.050 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:03 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e105 e105: 6 total, 6 up, 6 in Dec 2 05:04:03 localhost ovn_controller[154505]: 2025-12-02T10:04:03Z|00122|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:04:03 localhost nova_compute[281854]: 2025-12-02 10:04:03.737 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:04 localhost openstack_network_exporter[242845]: ERROR 10:04:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:04:04 localhost openstack_network_exporter[242845]: ERROR 10:04:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:04:04 localhost openstack_network_exporter[242845]: ERROR 10:04:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:04:04 localhost openstack_network_exporter[242845]: ERROR 10:04:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:04:04 localhost openstack_network_exporter[242845]: Dec 2 05:04:04 localhost openstack_network_exporter[242845]: ERROR 10:04:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:04:04 localhost openstack_network_exporter[242845]: Dec 2 05:04:04 localhost 
neutron_sriov_agent[256494]: 2025-12-02 10:04:04.111 2 INFO neutron.agent.securitygroups_rpc [None req-477510e9-c030-4124-bb5e-ce2ad555248a 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']#033[00m Dec 2 05:04:04 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:04:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1748178721' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:04:04 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:04:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1748178721' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:04:04 localhost dnsmasq[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/addn_hosts - 1 addresses Dec 2 05:04:04 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/host Dec 2 05:04:04 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/opts Dec 2 05:04:04 localhost podman[310116]: 2025-12-02 10:04:04.42741533 +0000 UTC m=+0.074420562 container kill 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.build-date=20251125) Dec 2 05:04:04 localhost nova_compute[281854]: 2025-12-02 10:04:04.974 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:05 localhost dnsmasq[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/addn_hosts - 0 addresses Dec 2 05:04:05 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/host Dec 2 05:04:05 localhost dnsmasq-dhcp[307978]: read /var/lib/neutron/dhcp/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f/opts Dec 2 05:04:05 localhost podman[310157]: 2025-12-02 10:04:05.791866802 +0000 UTC m=+0.068143634 container kill 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e106 e106: 6 total, 6 up, 6 in Dec 2 05:04:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:06 localhost podman[240799]: time="2025-12-02T10:04:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:04:06 localhost podman[240799]: @ - - [02/Dec/2025:10:04:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159754 "" "Go-http-client/1.1" Dec 2 05:04:06 localhost podman[240799]: @ - - 
[02/Dec/2025:10:04:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20202 "" "Go-http-client/1.1" Dec 2 05:04:06 localhost nova_compute[281854]: 2025-12-02 10:04:06.203 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost ovn_controller[154505]: 2025-12-02T10:04:06Z|00123|binding|INFO|Releasing lport fbe9f539-2caa-4225-b0aa-ee0756eec0f0 from this chassis (sb_readonly=0) Dec 2 05:04:06 localhost kernel: device tapfbe9f539-2c left promiscuous mode Dec 2 05:04:06 localhost nova_compute[281854]: 2025-12-02 10:04:06.227 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost ovn_controller[154505]: 2025-12-02T10:04:06Z|00124|binding|INFO|Setting lport fbe9f539-2caa-4225-b0aa-ee0756eec0f0 down in Southbound Dec 2 05:04:06 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:06.237 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c5273a4-e474-4c2c-a95a-a522e1a174bd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fbe9f539-2caa-4225-b0aa-ee0756eec0f0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:06 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:06.238 160221 INFO neutron.agent.ovn.metadata.agent [-] Port fbe9f539-2caa-4225-b0aa-ee0756eec0f0 in datapath 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f unbound from our chassis#033[00m Dec 2 05:04:06 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:06.243 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:06 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:06.244 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[058b8528-025a-4b06-9bce-e25d99fa0a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:06 localhost nova_compute[281854]: 2025-12-02 10:04:06.253 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost nova_compute[281854]: 2025-12-02 10:04:06.255 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:06 localhost nova_compute[281854]: 2025-12-02 10:04:06.877 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:07 localhost 
nova_compute[281854]: 2025-12-02 10:04:07.595 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.595 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.596 281858 INFO nova.compute.manager [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Unshelving#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.694 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.695 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: 
waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.698 281858 DEBUG nova.objects.instance [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'pci_requests' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.715 281858 DEBUG nova.objects.instance [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'numa_topology' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.728 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.728 281858 INFO nova.compute.claims [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Claim successful on node np0005541913.localdomain#033[00m Dec 2 05:04:07 localhost nova_compute[281854]: 2025-12-02 10:04:07.858 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:08 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/476037169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:08 localhost nova_compute[281854]: 2025-12-02 10:04:08.282 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:08 localhost nova_compute[281854]: 2025-12-02 10:04:08.291 281858 DEBUG nova.compute.provider_tree [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:04:08 localhost nova_compute[281854]: 2025-12-02 10:04:08.323 281858 DEBUG nova.scheduler.client.report [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:04:08 localhost nova_compute[281854]: 2025-12-02 10:04:08.352 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:08 localhost nova_compute[281854]: 2025-12-02 10:04:08.458 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:08 localhost nova_compute[281854]: 2025-12-02 10:04:08.458 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:08 localhost nova_compute[281854]: 2025-12-02 10:04:08.459 281858 DEBUG nova.network.neutron [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:04:08 localhost nova_compute[281854]: 2025-12-02 10:04:08.545 281858 DEBUG nova.network.neutron [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.107 281858 DEBUG nova.network.neutron [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.132 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.135 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.136 281858 INFO nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating image(s)#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.177 281858 DEBUG nova.storage.rbd_utils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.183 281858 DEBUG nova.objects.instance [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.240 281858 DEBUG nova.storage.rbd_utils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.285 281858 DEBUG nova.storage.rbd_utils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.291 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "1ce5597317ee1701cfc96dd9b078f17a61568b4b" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.292 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "1ce5597317ee1701cfc96dd9b078f17a61568b4b" 
acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.348 281858 DEBUG nova.virt.libvirt.imagebackend [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image locations are: [{'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/0e87d55f-56a4-4da8-9198-c633785685ee/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/0e87d55f-56a4-4da8-9198-c633785685ee/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.419 281858 DEBUG nova.virt.libvirt.imagebackend [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Selected location: {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/0e87d55f-56a4-4da8-9198-c633785685ee/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.420 281858 DEBUG nova.storage.rbd_utils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] cloning images/0e87d55f-56a4-4da8-9198-c633785685ee@snap to None/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.609 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock 
"1ce5597317ee1701cfc96dd9b078f17a61568b4b" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.867 281858 DEBUG nova.objects.instance [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'migration_context' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:09 localhost nova_compute[281854]: 2025-12-02 10:04:09.975 281858 DEBUG nova.storage.rbd_utils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] flattening vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.066 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:10 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:10.811 2 INFO neutron.agent.securitygroups_rpc [None req-4cc1fa1d-9a41-40fb-9e7e-ba331f6b18b7 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']#033[00m Dec 2 05:04:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.901 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 
268e09a3-7abe-4037-a14a-068e7b8a78fb] Image rbd:vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.902 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.903 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Ensure instance console log exists: /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.903 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.904 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.904 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.907 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-02T10:03:46Z,direct_url=,disk_format='raw',id=0e87d55f-56a4-4da8-9198-c633785685ee,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-2084001492-shelved',owner='09cae3217c5e430b8dbe17828669a978',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-12-02T10:04:04Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'boot_index': 0, 'size': 0, 'encryption_options': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.913 281858 WARNING nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.915 281858 DEBUG nova.virt.libvirt.host [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Searching host: 'np0005541913.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.916 281858 DEBUG nova.virt.libvirt.host [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.918 281858 DEBUG nova.virt.libvirt.host [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Searching host: 'np0005541913.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.919 281858 DEBUG nova.virt.libvirt.host [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.920 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.920 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-02T10:03:46Z,direct_url=,disk_format='raw',id=0e87d55f-56a4-4da8-9198-c633785685ee,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-2084001492-shelved',owner='09cae3217c5e430b8dbe17828669a978',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-12-02T10:04:04Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.921 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.922 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.922 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.923 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.923 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.923 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.924 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.924 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.925 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.925 281858 DEBUG nova.virt.hardware [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.926 281858 DEBUG nova.objects.instance [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:10 localhost nova_compute[281854]: 2025-12-02 10:04:10.951 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:04:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:04:11 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/534215597' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:04:11 localhost systemd[1]: tmp-crun.tIxpU4.mount: Deactivated successfully. 
Dec 2 05:04:11 localhost podman[310436]: 2025-12-02 10:04:11.462870088 +0000 UTC m=+0.099850653 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:11 localhost nova_compute[281854]: 2025-12-02 10:04:11.477 281858 DEBUG oslo_concurrency.processutils [None 
req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:11 localhost podman[310436]: 2025-12-02 10:04:11.506100634 +0000 UTC m=+0.143081199 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Dec 2 05:04:11 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:04:11 localhost nova_compute[281854]: 2025-12-02 10:04:11.523 281858 DEBUG nova.storage.rbd_utils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:11 localhost nova_compute[281854]: 2025-12-02 10:04:11.531 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:11 localhost nova_compute[281854]: 2025-12-02 10:04:11.931 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:04:11 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1346093737' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:04:11 localhost nova_compute[281854]: 2025-12-02 10:04:11.988 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:11 localhost nova_compute[281854]: 2025-12-02 10:04:11.991 281858 DEBUG nova.objects.instance [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'pci_devices' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.019 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] End _get_guest_xml xml= Dec 2 05:04:12 localhost nova_compute[281854]: 268e09a3-7abe-4037-a14a-068e7b8a78fb Dec 2 05:04:12 localhost nova_compute[281854]: instance-00000006 Dec 2 05:04:12 localhost nova_compute[281854]: 131072 Dec 2 05:04:12 localhost nova_compute[281854]: 1 Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: tempest-UnshelveToHostMultiNodesTest-server-2084001492 Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:10 Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: 128 Dec 2 05:04:12 localhost nova_compute[281854]: 1 Dec 2 05:04:12 localhost 
nova_compute[281854]: 0 Dec 2 05:04:12 localhost nova_compute[281854]: 0 Dec 2 05:04:12 localhost nova_compute[281854]: 1 Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: tempest-UnshelveToHostMultiNodesTest-557689334-project-member Dec 2 05:04:12 localhost nova_compute[281854]: tempest-UnshelveToHostMultiNodesTest-557689334 Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: RDO Dec 2 05:04:12 localhost nova_compute[281854]: OpenStack Compute Dec 2 05:04:12 localhost nova_compute[281854]: 27.5.2-0.20250829104910.6f8decf.el9 Dec 2 05:04:12 localhost nova_compute[281854]: 268e09a3-7abe-4037-a14a-068e7b8a78fb Dec 2 05:04:12 localhost nova_compute[281854]: 268e09a3-7abe-4037-a14a-068e7b8a78fb Dec 2 05:04:12 localhost nova_compute[281854]: Virtual Machine Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: hvm Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 
localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: /dev/urandom Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost 
nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: Dec 2 05:04:12 localhost nova_compute[281854]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.046 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.068 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] No BDM found with device name vda, not building metadata. 
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.069 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.070 281858 INFO nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Using config drive#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.107 281858 DEBUG nova.storage.rbd_utils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.134 281858 DEBUG nova.objects.instance [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.165 281858 DEBUG nova.objects.instance [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'keypairs' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.272 281858 INFO nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating config drive at /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.279 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp67jthe8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.408 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp67jthe8n" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.449 281858 DEBUG nova.storage.rbd_utils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.455 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.670 281858 DEBUG oslo_concurrency.processutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:12 localhost nova_compute[281854]: 2025-12-02 10:04:12.672 281858 INFO nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting local config drive /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config because it was imported into RBD.#033[00m Dec 2 05:04:12 localhost systemd-machined[84262]: New machine qemu-4-instance-00000006. Dec 2 05:04:12 localhost systemd[1]: Started Virtual Machine qemu-4-instance-00000006. 
Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.084 281858 DEBUG nova.compute.manager [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.085 281858 DEBUG nova.virt.libvirt.driver [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.086 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.087 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.095 281858 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance spawned successfully.#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.111 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.114 281858 DEBUG nova.compute.manager [None 
req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.133 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.133 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.134 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Started (Lifecycle Event)#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.154 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.158 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 
05:04:13 localhost nova_compute[281854]: 2025-12-02 10:04:13.175 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Dec 2 05:04:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e107 e107: 6 total, 6 up, 6 in Dec 2 05:04:14 localhost ovn_controller[154505]: 2025-12-02T10:04:14Z|00125|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:04:14 localhost nova_compute[281854]: 2025-12-02 10:04:14.162 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:15 localhost nova_compute[281854]: 2025-12-02 10:04:15.003 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:15 localhost nova_compute[281854]: 2025-12-02 10:04:15.071 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:15 localhost nova_compute[281854]: 2025-12-02 10:04:15.225 281858 DEBUG nova.compute.manager [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:04:15 localhost systemd[1]: tmp-crun.ipugDF.mount: Deactivated successfully. 
Dec 2 05:04:15 localhost podman[310612]: 2025-12-02 10:04:15.513238117 +0000 UTC m=+0.139371160 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:04:15 localhost podman[310612]: 2025-12-02 10:04:15.546037833 +0000 UTC 
m=+0.172170866 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125) Dec 2 05:04:15 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:04:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:16.105 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:04:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:16.111 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a966654efc63eb79f395da865ed495916856f318e31034e86d5a2b1abae24291" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 2 05:04:16 localhost nova_compute[281854]: 2025-12-02 10:04:16.278 281858 DEBUG oslo_concurrency.lockutils [None req-ed1bd6d1-7bf7-4190-b304-63c91a3b4709 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 8.683s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:16 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:16.387 2 INFO neutron.agent.securitygroups_rpc [req-bec2fcab-0b29-48c5-8c73-7c95715690aa req-3ce61e55-77a0-41a7-a01c-658bb353c505 5d2a1dd73fee440789897d09ac4f0afc b1db4f455ea047e3b37458f6d2c5e699 - - default default] Security group rule updated ['df5547d9-a152-449e-8fa5-5094da38cd68']#033[00m Dec 2 05:04:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:16.728 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Tue, 02 Dec 2025 10:04:16 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-bfeed26e-68d4-4062-80d2-ed5b766fcfaa x-openstack-request-id: req-bfeed26e-68d4-4062-80d2-ed5b766fcfaa _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 2 05:04:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:16.728 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "1839415d-f60e-4a1c-bcf9-a79f9f7cb24d", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/1839415d-f60e-4a1c-bcf9-a79f9f7cb24d"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/1839415d-f60e-4a1c-bcf9-a79f9f7cb24d"}]}, {"id": "45a99238-6f19-4f9e-be82-6ef3af1dcb31", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/45a99238-6f19-4f9e-be82-6ef3af1dcb31"}]}, {"id": "82beb986-6d20-42dc-b738-1cef87dee30f", "name": "m1.nano", "links": [{"rel": "self", "href": 
"http://nova-internal.openstack.svc:8774/v2.1/flavors/82beb986-6d20-42dc-b738-1cef87dee30f"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/82beb986-6d20-42dc-b738-1cef87dee30f"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 2 05:04:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:16.728 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-bfeed26e-68d4-4062-80d2-ed5b766fcfaa request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 2 05:04:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:16.731 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/82beb986-6d20-42dc-b738-1cef87dee30f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a966654efc63eb79f395da865ed495916856f318e31034e86d5a2b1abae24291" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 2 05:04:16 localhost nova_compute[281854]: 2025-12-02 10:04:16.737 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:17 localhost dnsmasq[307978]: exiting on receipt of SIGTERM Dec 2 05:04:17 localhost podman[310646]: 2025-12-02 10:04:17.312633135 +0000 UTC m=+0.057852509 container kill 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:04:17 localhost systemd[1]: libpod-2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2.scope: Deactivated successfully. Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.372 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Tue, 02 Dec 2025 10:04:16 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5c64cb57-59d0-4f58-9c1b-155a6e8d9224 x-openstack-request-id: req-5c64cb57-59d0-4f58-9c1b-155a6e8d9224 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.372 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "82beb986-6d20-42dc-b738-1cef87dee30f", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/82beb986-6d20-42dc-b738-1cef87dee30f"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/82beb986-6d20-42dc-b738-1cef87dee30f"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.372 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/82beb986-6d20-42dc-b738-1cef87dee30f used request id req-5c64cb57-59d0-4f58-9c1b-155a6e8d9224 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:04:17.376 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '09cae3217c5e430b8dbe17828669a978', 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'hostId': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.376 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.382 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fdf030b5-cbfc-4627-9f9d-051ad00efc8b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.376425', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4374ce9c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': 'f922141570756648eda573ffbc9cdfc117f584746b7b905f7657e75b04624933'}]}, 'timestamp': '2025-12-02 10:04:17.386462', '_unique_id': 'd9dc6505dd594aae85bf3cf5febc9efd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.388 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.390 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:04:17 localhost podman[310661]: 2025-12-02 10:04:17.390890009 +0000 UTC m=+0.063332416 container died 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.402 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.407 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost systemd[1]: tmp-crun.W69xsv.mount: Deactivated successfully. 
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.424 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.425 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3fc7d917-d0d1-4a4a-80b3-abfa84efa979', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.390401', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 
512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43787b78-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.609516538, 'message_signature': 'ddbd25cfd11850970f14e552530b89e18593e5f53126a569822f2401e8827ecb'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.390401', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43788a46-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.609516538, 'message_signature': 'bed9aefa369929a21255888b20b05824606ce83114800c77eac5c035a9f6a1d8'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.390401', 'resource_metadata': {'display_name': 
'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '437b4786-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.626474332, 'message_signature': 'bb2177b154ecd4ff0ce551cefa34daca5bca687bae49516344e43e1a984264e3'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.390401', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 
'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '437b60cc-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.626474332, 'message_signature': 'b3d4d907e3709399d0a7ff774816fa24a54938f3cf9064308e192509f862c014'}]}, 'timestamp': '2025-12-02 10:04:17.426001', '_unique_id': 'e2f49ff913e743098aa088868bd03846'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 
12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.427 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.430 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.430 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.431 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.431 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ca38731f-58ee-4ea4-ace5-67c71512a8c4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.431239', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '437c41a4-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': '872a5c7e7b5138ddc1039a826063ecead8fc9782902cefffb8e29f92b76a7f81'}]}, 'timestamp': '2025-12-02 10:04:17.431804', '_unique_id': '7ee2b993cc5e46c29d58b470ef14bfef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.432 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.433 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.435 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '644b84f2-dda9-4537-bf33-4b49fa4459e3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.435734', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '437cf2de-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': 'b25ae39e56c899618aef2226b0341b5c4be7d242079d4fa697c614c491e0ccc4'}]}, 'timestamp': '2025-12-02 10:04:17.436305', '_unique_id': 'a4d0b4d0d58f4956bafae91c8133192a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.436 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.437 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 2 05:04:17 localhost podman[310661]: 2025-12-02 10:04:17.449108037 +0000 UTC m=+0.121550384 container cleanup 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 2 05:04:17 localhost systemd[1]: libpod-conmon-2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2.scope: Deactivated successfully.
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.466 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.467 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.493 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.494 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '17ee1acc-f87f-4b65-936b-945053c8099a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.440560', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4381afb8-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '9bf565e33c6e8ddef478efb0749b3a3ec0251647591238aadd80cec4d1eb6f94'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.440560', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4381c854-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '2570bed2e7944db4ed6117891cf0a1a8442d3b3770d06353dad01cc566d2e54c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.440560', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '4385bcca-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': 'c04e967d652b8b7835747cc07968e5358b116e99bd4c01229d3662b83d66ed67'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.440560', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '4385d23c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': '51268590f285a99e3654e8c08385b4b9ccfcc56a3d34e232bfeeeca057e08c6d'}]}, 'timestamp': '2025-12-02 10:04:17.494445', '_unique_id': 'ab10bb66032f48dd829494651c17576b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.495 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.496 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.508 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.508 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.509 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.509 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a71c7d09-d153-4af4-b328-b9bdcb5b3fdc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.508105', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '438801ec-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '7bccd844f4f0763d124f75f01e2bbf5b69dee5573ff67ea2dbd6e67c31650a6b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.508105', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id':
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '438815ba-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '26655661ddd4bc74a97066808eb6310a3da0b029b2c23a8becd5b33bda0c4d4d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 760, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.508105', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 
'disk_name': 'vda'}, 'message_id': '43882618-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': '658b62569d084e87b731b2380f5dc6d4ae9ecaf504a277a34c19ea4e3175018d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.508105', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '43883824-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': 'dfbed061465d6ad068129652dbd0a0ff4faa1d25410dd5c6d14ce651ef00ecc2'}]}, 'timestamp': '2025-12-02 10:04:17.510153', '_unique_id': '046341e0dcc448c7a40ea44397d03c2c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.511 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.512 12 INFO ceilometer.polling.manager [-] Polling 
pollster network.incoming.packets.error in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.513 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '436e3be7-8fd3-49e5-9d89-01232434cb4c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.513146', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 
'message_id': '4388f44e-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': '36ef6424547bdb1ec3a4150a49aaa92e2b788b28262529c7b7eea5c099a7aaee'}]}, 'timestamp': '2025-12-02 10:04:17.517088', '_unique_id': 'fcacae156ff5426ab5e65c3e79d13e8a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection 
refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.519 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.522 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 05:04:17 localhost podman[310662]: 2025-12-02 10:04:17.539294189 +0000 UTC m=+0.205559760 container remove 2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.541 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 16460000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.553 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/cpu volume: 4250000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '9d8156a4-4a71-4e8d-bc10-bf8f574520af', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16460000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:04:17.523116', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '438cfe54-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.759971123, 'message_signature': '518c9a77385f15b92922c4709a0b7c46bcb91de197cb17981360a220ed2a62a0'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 4250000000, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'timestamp': '2025-12-02T10:04:17.523116', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 
'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': '438eec46-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.772770825, 'message_signature': '80f2788aa81147bb84c5680b3472eae54b7a9c3092f3759934e401f95ecf86f5'}]}, 'timestamp': '2025-12-02 10:04:17.554060', '_unique_id': '62005cafbda24737adee96db661937cf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:04:17.554 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.554 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.555 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.555 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.555 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 DEBUG 
ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b5442586-1661-44d9-a848-41f327a471f3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.555658', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 
'438f341c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.609516538, 'message_signature': '6f9b7c1d46d252db4c7306f9a9ac7e6a6d0b72c66a03fb8c327811a1fd27ea85'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.555658', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '438f3bba-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.609516538, 'message_signature': 'd6811bd3a785f48f9ef7479452e66d4a7a74b576f1248724b3addec74bffb1ef'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.555658', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': 
'268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '438f4290-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.626474332, 'message_signature': '01b460ef4cf0db55b904511aa5d14b09388013a61f26dc3b46ea0d5b270c3e37'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.555658', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 
'message_id': '438f4916-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.626474332, 'message_signature': 'aac5bfbc4f8a1d533619a737b14e619c2f6a007368b6b564a5dc664397beda00'}]}, 'timestamp': '2025-12-02 10:04:17.556391', '_unique_id': '4e40fc3963d04c5bb850dcb0a086df9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection 
refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.556 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.557 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.557 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9477040e-195c-44a5-88e9-cfb0c4c580b6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.557429', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '438f7936-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': 'a5c422de8b7edd44e629be07e77d818ba008b9d9312408dbfa5a4b2132f784cd'}]}, 'timestamp': '2025-12-02 10:04:17.557656', '_unique_id': '98b39ccc26b249b3a3209e485bc7a532'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.558 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.558 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '895e4572-a2f3-4863-a3f8-6e9700baf252', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.558564', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '438fa604-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': '0ee86c968c2290ef62de14a85c21d00f10f4c1685225064e72067218d9537ec1'}]}, 'timestamp': '2025-12-02 10:04:17.558784', '_unique_id': '83dddf486f0e4ecb9e233d6fa72e0c75'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.559 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.read.latency volume: 1047513403 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.read.latency volume: 2316922 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '727edf32-3211-40da-9c4e-b7804f200882', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.559695', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '438fd160-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': 'e5e750e12e1057c8d611a6c63e418dc90a26ebe772e868b29f20942b47b960e0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.559695', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '438fd82c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '3ce2348de76efc113bbaf6c8637887ffecb68790a0716043d7b4bdf0ae5b5ca6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1047513403, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.559695', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 
'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '438fdeda-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': '49a76adb64c0a6c6c12bc350904a38144ecec088fec15642ecf7dcdb226d9799'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 2316922, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.559695', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '438fe556-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': '6cff2d142fb5f914662add5a6875a21e72713d255f5eea335f4cffc9ee3719ef'}]}, 'timestamp': '2025-12-02 10:04:17.560390', '_unique_id': '1e4da737de564bc49ebffb507796c442'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.560 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.561 12 INFO ceilometer.polling.manager [-] Polling 
pollster memory.usage in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.561 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.561 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.561 12 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 268e09a3-7abe-4037-a14a-068e7b8a78fb: ceilometer.compute.pollsters.NoVolumeException Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '69b04d7f-bfad-4b6e-85b5-0f6029d53f66', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:04:17.561379', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '4390133c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.759971123, 'message_signature': '31d0d0bdd078db893609f617adf5140d21ec2f9355c4f19dc553d85886cd6009'}]}, 'timestamp': '2025-12-02 10:04:17.561739', '_unique_id': '571759ea63d64f3383fe288777ffe157'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:04:17.562 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.562 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '799db7e3-9444-4e76-821a-5959c034cfb3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.563119', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4390570c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': '7bbb00bdb8f46e8d54b31b7fe540251dbc448e8fe0638f6f7f6b15ab2970be91'}]}, 'timestamp': '2025-12-02 10:04:17.563317', '_unique_id': '839b3e820134431a959192a0ace89851'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.563 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.564 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.564 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2d791f65-86d7-45c5-a90f-2c48ab2e627b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.564350', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4390895c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': 'a1fa16c6bb97f67d04a0f83bce96ff7a5e8985749486acda944644f68be0fee3'}]}, 'timestamp': '2025-12-02 10:04:17.564607', '_unique_id': '631e562b066c4efbbb3c03a4d69b6603'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:04:17.565 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.565 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7760aa84-f6c0-46cb-b4f3-1e90e87579d9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.565539', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4390b648-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': 'c134bc7aa250d75374d0adeb6965feb1d8f29b0417e404a1bb5a1125d7bebd2c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.565539', 'resource_metadata': 
{'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4390bd1e-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': 'cd2aa20860a299f96149fbf8f6ea3b97552709ecbb1911808c1df9343269bf79'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.565539', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 
'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '4390c390-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': '0ac98cd35db0a12a528ad4e15c1f4bcd410c81d346509d95b6be014ad5fa4173'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.565539', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '4390c9e4-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': 'a79cff9d0860c5c0dc56d19dc0a7ea5c67a1888cafe919e4a769dd1d5687d014'}]}, 'timestamp': '2025-12-02 10:04:17.566241', '_unique_id': '10fc7d8b645c4a2681c4e3fc7701cba4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.566 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.567 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.572 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.572 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.572 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3a337f2c-8941-4b4e-85b7-205e5372deff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.573033', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4391da96-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.595470982, 'message_signature': '4f844231400cbffe8c2cb1b326a0e1f86db49fcf8714ebe4033af23a3a014ab2'}]}, 'timestamp': '2025-12-02 10:04:17.573240', '_unique_id': 'faf92fed76f848e5bb4ff27522c81291'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.573 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.574 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.574 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.574 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.574 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.574 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cf05612b-d064-4fca-b771-721662ba8f65', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.574186', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4392076e-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '4b6cc9146883c898c2c9f460607b9a8be28ac4cd07411a177fd0f869f9957017'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.574186', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43920e26-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '64e38319c9cf99f8de379c7a6e1f0fff5e974ef33e49b589c4295cd41c2d0ef9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 23775232, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.574186', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 
'disk_name': 'vda'}, 'message_id': '43921574-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': 'c4701453e9fdd0c274fefc272d27cd1d408ba06b01c3524859c26c0e76807d1c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2048, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.574186', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '43921bf0-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': '4343a6efcc13ea409353688af6cde2197212096a8bf32a8600a52ca586b4a1e5'}]}, 'timestamp': '2025-12-02 10:04:17.574895', '_unique_id': '4f9727773620470f833f89dc92329063'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters 
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.575 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'af28eb7e-8358-4dff-8f23-e473e9b081ba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:04:17.575863', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '4392490e-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 
12019.595470982, 'message_signature': 'b4895d045899b99a320fb25b9968f538e63aadba14a629eebc4f84ba484e02eb'}]}, 'timestamp': '2025-12-02 10:04:17.576065', '_unique_id': '9a38f13eaf7748a99a9e6ce2a1ae16a0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging raise 
ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.576 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.577 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.577 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.577 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.577 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c5204faa-c903-4388-8a95-8927c3ee853a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.576981', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '43927492-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.609516538, 'message_signature': 'b2f82edd68332d330d75d232b5d4078b94145072abeb944f447f455c1032b19d'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.576981', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '43927b72-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.609516538, 'message_signature': '5ab92c864cd16385071a71b957e652320238d24978e5c416ac553f2b4a3801bb'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.576981', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 
'439281d0-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.626474332, 'message_signature': 'a563dd48d2f1921d733cfdd2b213c2af50d1acfda3c37693af5b5d6d98ad6cbd'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.576981', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '43928810-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.626474332, 'message_signature': '48abf2d0479de67d3a8d4461600f0f807d816776c239bf0293edcc0f9eb4a37f'}]}, 'timestamp': '2025-12-02 10:04:17.577680', '_unique_id': 'adeb7366c55742e6bfa519a7b0becaee'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 ERROR oslo_messaging.notify.messaging Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 
05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.578 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 DEBUG ceilometer.compute.pollsters [-] 268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f5db9fce-e119-4cea-99b4-d778bdc32adf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:04:17.578630', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4392b524-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '356eb9bd340d69c63f2e988d4f8ae9870c004067c4cebffcaa96f7fc88030007'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:04:17.578630', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4392bbc8-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.65967854, 'message_signature': '59ad6278572357e58650a51cd7cb3a00f5f447707f20c5c8be308a174e677958'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-vda', 'timestamp': '2025-12-02T10:04:17.578630', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 
'vda'}, 'message_id': '4392c212-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': '68209aa0146d18d7ca3bbd7efe40fa76b1b277259aa585cec196ed07aab4817b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '96d084f3c3184bf4ac7b9635139dd4aa', 'user_name': None, 'project_id': '09cae3217c5e430b8dbe17828669a978', 'project_name': None, 'resource_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb-sda', 'timestamp': '2025-12-02T10:04:17.578630', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-2084001492', 'name': 'instance-00000006', 'instance_id': '268e09a3-7abe-4037-a14a-068e7b8a78fb', 'instance_type': 'm1.nano', 'host': '7cad67b582e6426efa9c66e424f7a681c87315f17829394b22a9c3a2', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '82beb986-6d20-42dc-b738-1cef87dee30f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '0e87d55f-56a4-4da8-9198-c633785685ee'}, 'image_ref': '0e87d55f-56a4-4da8-9198-c633785685ee', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '4392c848-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12019.687085392, 'message_signature': 'a7ca20a58fe03f96a1fe7e81cb139ac95986dbcecc0a9d3ae7c9c5b6d879365e'}]}, 'timestamp': '2025-12-02 10:04:17.579304', '_unique_id': 'bb894090447141c580fc63b9b1df23f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:04:17 localhost 
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:04:17 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:04:17.579 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:04:17 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:17.760 263406 INFO neutron.agent.dhcp.agent [None req-9d5fb568-c306-4dac-9877-2d927af3520d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.793 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.794 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.795 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.795 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.796 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.797 281858 INFO nova.compute.manager [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Terminating instance
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.799 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.799 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.800 281858 DEBUG nova.network.neutron [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 2 05:04:17 localhost nova_compute[281854]: 2025-12-02 10:04:17.909 281858 DEBUG nova.network.neutron [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 2 05:04:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:18.013 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:04:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 05:04:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 05:04:18 localhost nova_compute[281854]: 2025-12-02 10:04:18.160 281858 DEBUG nova.network.neutron [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 2 05:04:18 localhost nova_compute[281854]: 2025-12-02 10:04:18.177 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 2 05:04:18 localhost nova_compute[281854]: 2025-12-02 10:04:18.178 281858 DEBUG nova.compute.manager [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 2 05:04:18 localhost podman[310687]: 2025-12-02 10:04:18.213451375 +0000 UTC m=+0.093127173 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 2 05:04:18 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 2 05:04:18 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 5.676s CPU time.
Dec 2 05:04:18 localhost systemd-machined[84262]: Machine qemu-4-instance-00000006 terminated.
Dec 2 05:04:18 localhost podman[310687]: 2025-12-02 10:04:18.262264681 +0000 UTC m=+0.141940469 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 2 05:04:18 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 05:04:18 localhost podman[310688]: 2025-12-02 10:04:18.279258645 +0000 UTC m=+0.157791622 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 2 05:04:18 localhost podman[310688]: 2025-12-02 10:04:18.291831502 +0000 UTC m=+0.170364469 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 2 05:04:18 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:18.296 2 INFO neutron.agent.securitygroups_rpc [req-3542c6d6-3e9a-4403-b3b7-62c55b0a2440 req-a1b9621e-b7b6-4f72-a92d-ded5fdb895c8 5d2a1dd73fee440789897d09ac4f0afc b1db4f455ea047e3b37458f6d2c5e699 - - default default] Security group rule updated ['df5547d9-a152-449e-8fa5-5094da38cd68']
Dec 2 05:04:18 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 05:04:18 localhost systemd[1]: tmp-crun.nrXs53.mount: Deactivated successfully.
Dec 2 05:04:18 localhost systemd[1]: var-lib-containers-storage-overlay-93ad5d2b9af04d633613c8f460d48e56923a84b4e7f2b732ec5f908e2b44d433-merged.mount: Deactivated successfully.
Dec 2 05:04:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d42155b5a72db54c622c9ed42c83a8217719c79542b37e5b2087004cd3850e2-userdata-shm.mount: Deactivated successfully.
Dec 2 05:04:18 localhost systemd[1]: run-netns-qdhcp\x2d62df5f27\x2dc8d9\x2d4d79\x2d9ad6\x2d2f32e63bf47f.mount: Deactivated successfully.
Dec 2 05:04:18 localhost nova_compute[281854]: 2025-12-02 10:04:18.403 281858 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.
Dec 2 05:04:18 localhost nova_compute[281854]: 2025-12-02 10:04:18.403 281858 DEBUG nova.objects.instance [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'resources' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 2 05:04:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:18.890 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.076 281858 INFO nova.virt.libvirt.driver [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting instance files /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.078 281858 INFO nova.virt.libvirt.driver [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deletion of /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del complete
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.149 281858 INFO nova.compute.manager [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Took 0.97 seconds to destroy the instance on the hypervisor.
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.152 281858 DEBUG oslo.service.loopingcall [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.153 281858 DEBUG nova.compute.manager [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.154 281858 DEBUG nova.network.neutron [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.215 281858 DEBUG nova.network.neutron [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.228 281858 DEBUG nova.network.neutron [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.243 281858 INFO nova.compute.manager [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Took 0.09 seconds to deallocate network for instance.
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.306 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.307 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.390 281858 DEBUG oslo_concurrency.processutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 2 05:04:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:04:19 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2552744948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.850 281858 DEBUG oslo_concurrency.processutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.858 281858 DEBUG nova.compute.provider_tree [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.898 281858 DEBUG nova.scheduler.client.report [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.915 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:04:19 localhost nova_compute[281854]: 2025-12-02 10:04:19.973 281858 INFO nova.scheduler.client.report [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Deleted allocations for instance 268e09a3-7abe-4037-a14a-068e7b8a78fb
Dec 2 05:04:20 localhost nova_compute[281854]: 2025-12-02 10:04:20.030 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:04:20 localhost nova_compute[281854]: 2025-12-02 10:04:20.049 281858 DEBUG oslo_concurrency.lockutils [None req-03d25e47-17ae-4548-9c68-61f7dc41ea30 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 2 05:04:20 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 05:04:20 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:04:20 localhost nova_compute[281854]: 2025-12-02 10:04:20.073 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:04:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 e108: 6 total, 6 up, 6 in
Dec 2 05:04:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:04:20 localhost nova_compute[281854]: 2025-12-02 10:04:20.923 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:04:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:21.651 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005541914.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:01Z, description=, device_id=82e23ec3-1d57-4166-9ba0-839ded943a78, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-livemigrationtest-server-39688497, extra_dhcp_opts=[], fixed_ips=[], id=54433c73-7e5c-481c-b64c-19e9cfd6e56f, ip_allocation=immediate, mac_address=fa:16:3e:bb:b6:1c, name=tempest-parent-146896978, network_id=13bbad22-ab61-4b1f-849e-c651aa8f3297, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['576d6513-029b-4880-bb0b-58094b586b90'], standard_attr_id=537, status=DOWN, tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, trunk_details=sub_ports=[], trunk_id=3bda7a6b-42c4-4395-9870-485919ec4ac2, updated_at=2025-12-02T10:04:20Z on network 13bbad22-ab61-4b1f-849e-c651aa8f3297
Dec 2 05:04:21 localhost dnsmasq[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/addn_hosts - 2 addresses
Dec 2 05:04:21 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/host
Dec 2 05:04:21 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/opts
Dec 2 05:04:21 localhost podman[310876]: 2025-12-02 10:04:21.878334131 +0000 UTC m=+0.062307268 container kill 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:04:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:22.055 263406 INFO neutron.agent.dhcp.agent [None req-76d7b50f-ef35-4f8e-978e-85c25ca3db70 - - - - - -] DHCP configuration for ports {'54433c73-7e5c-481c-b64c-19e9cfd6e56f'} is completed
Dec 2 05:04:22 localhost nova_compute[281854]: 2025-12-02 10:04:22.182 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:04:23 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:04:23 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:23.966 2 INFO neutron.agent.securitygroups_rpc [req-65998e7d-c26a-45a5-8676-fd86a74e40b3 req-1863187d-62f6-4dd8-8a63-a2eeaa9837d3 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['dfa589a5-e6b3-419a-9bd7-e5b7ecfd8cd6']
Dec 2 05:04:24 localhost ovn_controller[154505]: 2025-12-02T10:04:24Z|00126|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0)
Dec 2 05:04:24 localhost nova_compute[281854]: 2025-12-02 10:04:24.328 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:04:25 localhost nova_compute[281854]: 2025-12-02 10:04:25.063 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:04:25 localhost nova_compute[281854]: 2025-12-02 10:04:25.076 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 2 05:04:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 05:04:25 localhost podman[310898]: 2025-12-02 10:04:25.436564225 +0000 UTC m=+0.072227684 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:04:25 localhost podman[310898]: 2025-12-02 10:04:25.444671862 +0000 UTC m=+0.080335351 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125) Dec 2 05:04:25 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:04:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:27 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:27.147 2 INFO neutron.agent.securitygroups_rpc [req-6d0b23d6-658e-4a79-96cf-b8ca52a56a83 req-dc334455-9197-4ae2-b241-5b724098ced8 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['aadc9cbe-01f3-422d-afff-735004537d1d']#033[00m Dec 2 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:04:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:04:29 localhost podman[310917]: 2025-12-02 10:04:29.442682009 +0000 UTC m=+0.082591020 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:04:29 localhost podman[310917]: 2025-12-02 10:04:29.45801607 +0000 UTC 
m=+0.097925131 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:04:29 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:04:29 localhost podman[310918]: 2025-12-02 10:04:29.512879927 +0000 UTC m=+0.147091125 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:04:29 localhost podman[310918]: 2025-12-02 10:04:29.580009843 +0000 UTC m=+0.214221001 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:29 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:29.592 2 INFO neutron.agent.securitygroups_rpc [req-1c594721-186d-4097-a94c-c620e0979c63 req-4b6914f0-ee8c-4772-ac7a-a3075974ee64 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['41f7c9c8-7668-4604-9cee-64c2ce6fa2c0']#033[00m Dec 2 05:04:29 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:04:30 localhost nova_compute[281854]: 2025-12-02 10:04:30.077 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:04:30 localhost nova_compute[281854]: 2025-12-02 10:04:30.078 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:04:30 localhost nova_compute[281854]: 2025-12-02 10:04:30.079 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:04:30 localhost nova_compute[281854]: 2025-12-02 10:04:30.079 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:04:30 localhost nova_compute[281854]: 2025-12-02 10:04:30.093 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:30 localhost nova_compute[281854]: 2025-12-02 10:04:30.094 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:04:30 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:32 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:32.190 2 INFO neutron.agent.securitygroups_rpc [req-10b28dbb-d460-47e0-a99a-7ab94b16b5dd req-5be6a150-24be-4b75-af16-d1e63344c43d 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['20cbc49d-f7c3-4e2e-87e6-586884a8dc4b']#033[00m Dec 2 05:04:32 localhost ovn_controller[154505]: 2025-12-02T10:04:32Z|00127|binding|INFO|Releasing lport 
d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:04:32 localhost nova_compute[281854]: 2025-12-02 10:04:32.827 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:33.329 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:33 localhost nova_compute[281854]: 2025-12-02 10:04:33.330 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:33.331 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:04:33 localhost nova_compute[281854]: 2025-12-02 10:04:33.401 281858 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:33 localhost nova_compute[281854]: 2025-12-02 10:04:33.401 281858 INFO nova.compute.manager [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Stopped (Lifecycle Event)#033[00m Dec 2 05:04:33 localhost nova_compute[281854]: 2025-12-02 10:04:33.434 281858 DEBUG nova.compute.manager [None 
req-55824f8c-c006-4d30-974b-4804d6b3b430 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:34 localhost ovn_controller[154505]: 2025-12-02T10:04:34Z|00128|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:04:34 localhost openstack_network_exporter[242845]: ERROR 10:04:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:04:34 localhost openstack_network_exporter[242845]: ERROR 10:04:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:04:34 localhost openstack_network_exporter[242845]: ERROR 10:04:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:04:34 localhost openstack_network_exporter[242845]: ERROR 10:04:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:04:34 localhost openstack_network_exporter[242845]: Dec 2 05:04:34 localhost openstack_network_exporter[242845]: ERROR 10:04:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:04:34 localhost openstack_network_exporter[242845]: Dec 2 05:04:34 localhost nova_compute[281854]: 2025-12-02 10:04:34.072 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:35 localhost nova_compute[281854]: 2025-12-02 10:04:35.119 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:35 localhost nova_compute[281854]: 
2025-12-02 10:04:35.905 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:36 localhost podman[240799]: time="2025-12-02T10:04:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:04:36 localhost podman[240799]: @ - - [02/Dec/2025:10:04:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157930 "" "Go-http-client/1.1" Dec 2 05:04:36 localhost podman[240799]: @ - - [02/Dec/2025:10:04:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19728 "" "Go-http-client/1.1" Dec 2 05:04:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:36.406 2 INFO neutron.agent.securitygroups_rpc [req-7ec4157a-3973-4fe9-90a5-6b7e95187ed9 req-9adda286-3e5c-4f67-99d9-e6d6658a3dd8 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']#033[00m Dec 2 05:04:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:36.789 2 INFO neutron.agent.securitygroups_rpc [req-41dd90c2-f92d-4e4d-a9a2-5512726d06ed req-eda4abe0-dc4d-48d0-a211-5598e3a12357 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']#033[00m Dec 2 05:04:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:37.362 263406 INFO neutron.agent.linux.ip_lib [None req-e8709c5f-bad0-4cd8-a86a-bcb0776bda8c - - - - - -] Device tapc1f0bd46-6b cannot be used as it has no MAC address#033[00m Dec 2 05:04:37 localhost nova_compute[281854]: 2025-12-02 10:04:37.377 281858 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] 
Creating tmpfile /var/lib/nova/instances/tmpvcgqfy3k to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Dec 2 05:04:37 localhost nova_compute[281854]: 2025-12-02 10:04:37.387 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:37 localhost kernel: device tapc1f0bd46-6b entered promiscuous mode Dec 2 05:04:37 localhost nova_compute[281854]: 2025-12-02 10:04:37.395 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:37 localhost NetworkManager[5965]: [1764669877.3965] manager: (tapc1f0bd46-6b): new Generic device (/org/freedesktop/NetworkManager/Devices/25) Dec 2 05:04:37 localhost ovn_controller[154505]: 2025-12-02T10:04:37Z|00129|binding|INFO|Claiming lport c1f0bd46-6bae-4902-9292-e19c6e88557a for this chassis. Dec 2 05:04:37 localhost ovn_controller[154505]: 2025-12-02T10:04:37Z|00130|binding|INFO|c1f0bd46-6bae-4902-9292-e19c6e88557a: Claiming unknown Dec 2 05:04:37 localhost systemd-udevd[310976]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:04:37 localhost nova_compute[281854]: 2025-12-02 10:04:37.404 281858 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Dec 2 05:04:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:37.408 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e9e3da8770844ad5b5552298a24dcbd2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46d349b8-3ce0-4286-826a-479b1dd2a429, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c1f0bd46-6bae-4902-9292-e19c6e88557a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:37.410 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c1f0bd46-6bae-4902-9292-e19c6e88557a in datapath 26a036bb-7fc2-42d0-b324-4cf6bb77a9d6 bound to our chassis#033[00m Dec 2 05:04:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:37.412 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 26a036bb-7fc2-42d0-b324-4cf6bb77a9d6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:04:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:37.413 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6b913cd4-f142-4c5b-96e0-1360f53dd31f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:37 localhost journal[230136]: ethtool ioctl error on tapc1f0bd46-6b: No such device Dec 2 05:04:37 localhost journal[230136]: ethtool ioctl error on tapc1f0bd46-6b: No such device Dec 2 05:04:37 localhost ovn_controller[154505]: 2025-12-02T10:04:37Z|00131|binding|INFO|Setting lport c1f0bd46-6bae-4902-9292-e19c6e88557a ovn-installed in OVS Dec 2 05:04:37 localhost ovn_controller[154505]: 2025-12-02T10:04:37Z|00132|binding|INFO|Setting lport c1f0bd46-6bae-4902-9292-e19c6e88557a up in Southbound Dec 2 05:04:37 localhost journal[230136]: ethtool ioctl error on tapc1f0bd46-6b: No such device Dec 2 05:04:37 localhost 
journal[230136]: ethtool ioctl error on tapc1f0bd46-6b: No such device Dec 2 05:04:37 localhost journal[230136]: ethtool ioctl error on tapc1f0bd46-6b: No such device Dec 2 05:04:37 localhost journal[230136]: ethtool ioctl error on tapc1f0bd46-6b: No such device Dec 2 05:04:37 localhost journal[230136]: ethtool ioctl error on tapc1f0bd46-6b: No such device Dec 2 05:04:37 localhost nova_compute[281854]: 2025-12-02 10:04:37.473 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:37 localhost journal[230136]: ethtool ioctl error on tapc1f0bd46-6b: No such device Dec 2 05:04:37 localhost nova_compute[281854]: 2025-12-02 10:04:37.496 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:37 localhost nova_compute[281854]: 2025-12-02 10:04:37.525 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:38 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:38.318 2 INFO neutron.agent.securitygroups_rpc [req-8e540b7a-da71-4acd-ab56-fd3bce480c0a req-799fda44-ad0c-42d1-806d-41b3bc34424c 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']#033[00m Dec 2 05:04:38 localhost podman[311047]: Dec 2 05:04:38 localhost podman[311047]: 2025-12-02 10:04:38.384830956 +0000 UTC m=+0.081765588 container create 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:04:38 localhost systemd[1]: Started libpod-conmon-2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88.scope. Dec 2 05:04:38 localhost podman[311047]: 2025-12-02 10:04:38.338710273 +0000 UTC m=+0.035644905 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:04:38 localhost systemd[1]: Started libcrun container. Dec 2 05:04:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/909ccbf636b56e5fcb70f402308fe6a02f149a317eaed6dc848cd26938534901/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:04:38 localhost podman[311047]: 2025-12-02 10:04:38.467374995 +0000 UTC m=+0.164309597 container init 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:04:38 localhost podman[311047]: 2025-12-02 10:04:38.481688978 +0000 UTC m=+0.178623570 container start 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:04:38 localhost dnsmasq[311070]: started, version 2.85 cachesize 150 Dec 2 05:04:38 localhost dnsmasq[311070]: DNS service limited to local subnets Dec 2 05:04:38 localhost dnsmasq[311070]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:04:38 localhost dnsmasq[311070]: warning: no upstream servers configured Dec 2 05:04:38 localhost dnsmasq-dhcp[311070]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:04:38 localhost dnsmasq[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/addn_hosts - 0 addresses Dec 2 05:04:38 localhost dnsmasq-dhcp[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/host Dec 2 05:04:38 localhost dnsmasq-dhcp[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/opts Dec 2 05:04:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:38.518 263406 INFO neutron.agent.linux.ip_lib [None req-032822b7-5695-4a23-85cb-89838df6da4a - - - - - -] Device tapbd990115-99 cannot be used as it has no MAC address#033[00m Dec 2 05:04:38 localhost nova_compute[281854]: 2025-12-02 10:04:38.577 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:38 localhost kernel: device tapbd990115-99 entered promiscuous mode Dec 2 05:04:38 localhost NetworkManager[5965]: [1764669878.5818] manager: (tapbd990115-99): new Generic device (/org/freedesktop/NetworkManager/Devices/26) Dec 2 05:04:38 localhost ovn_controller[154505]: 2025-12-02T10:04:38Z|00133|binding|INFO|Claiming lport bd990115-9909-4e4e-a861-f26c2f53a28c for this chassis. 
Dec 2 05:04:38 localhost ovn_controller[154505]: 2025-12-02T10:04:38Z|00134|binding|INFO|bd990115-9909-4e4e-a861-f26c2f53a28c: Claiming unknown Dec 2 05:04:38 localhost nova_compute[281854]: 2025-12-02 10:04:38.584 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:38 localhost ovn_controller[154505]: 2025-12-02T10:04:38Z|00135|binding|INFO|Setting lport bd990115-9909-4e4e-a861-f26c2f53a28c ovn-installed in OVS Dec 2 05:04:38 localhost nova_compute[281854]: 2025-12-02 10:04:38.621 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:38 localhost nova_compute[281854]: 2025-12-02 10:04:38.654 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:38 localhost nova_compute[281854]: 2025-12-02 10:04:38.677 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:38 localhost ovn_controller[154505]: 2025-12-02T10:04:38Z|00136|binding|INFO|Setting lport bd990115-9909-4e4e-a861-f26c2f53a28c up in Southbound Dec 2 05:04:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:38.882 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50df25ee29424615807a458690cdf8d7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b257864-5151-448f-941d-2c9a748f5881, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bd990115-9909-4e4e-a861-f26c2f53a28c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:38.885 160221 INFO neutron.agent.ovn.metadata.agent [-] Port bd990115-9909-4e4e-a861-f26c2f53a28c in datapath 45d02cf1-f511-4416-b7c1-b37c417f16f9 bound to our chassis#033[00m Dec 2 05:04:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:38.888 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port c851ffbc-ac95-4b63-ad5b-c219b2577bdd IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:04:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:38.889 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45d02cf1-f511-4416-b7c1-b37c417f16f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:38.890 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[f39ceb36-aaf2-4a53-b2b1-1adb87cfa4af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:39 localhost nova_compute[281854]: 2025-12-02 
10:04:39.074 281858 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='82e23ec3-1d57-4166-9ba0-839ded943a78',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Dec 2 05:04:39 localhost systemd[1]: tmp-crun.2uEwZK.mount: Deactivated successfully. 
Dec 2 05:04:39 localhost nova_compute[281854]: 2025-12-02 10:04:39.689 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:39 localhost nova_compute[281854]: 2025-12-02 10:04:39.689 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquired lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:39 localhost nova_compute[281854]: 2025-12-02 10:04:39.690 281858 DEBUG nova.network.neutron [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 05:04:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:39.707 263406 INFO neutron.agent.dhcp.agent [None req-62354099-c5ec-4d6a-906d-f9f9ff98d970 - - - - - -] DHCP configuration for ports {'3f99beb7-5057-4f25-a68f-132a387d4a7b'} is completed#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.168 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:40 localhost podman[311130]: Dec 2 05:04:40 localhost podman[311130]: 2025-12-02 10:04:40.191557211 +0000 UTC m=+0.127674827 container create 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:40 localhost podman[311130]: 2025-12-02 10:04:40.110340178 +0000 UTC m=+0.046457864 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:04:40 localhost systemd[1]: Started libpod-conmon-5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d.scope. Dec 2 05:04:40 localhost systemd[1]: Started libcrun container. Dec 2 05:04:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94147dbae9838956e714723c867733a25e47b3b6162526a89da5f485c251bb56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:04:40 localhost podman[311130]: 2025-12-02 10:04:40.271426618 +0000 UTC m=+0.207544234 container init 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:04:40 localhost podman[311130]: 2025-12-02 10:04:40.281396454 +0000 UTC m=+0.217514090 container start 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:04:40 localhost dnsmasq[311147]: started, version 2.85 cachesize 150 Dec 2 05:04:40 localhost dnsmasq[311147]: DNS service limited to local subnets Dec 2 05:04:40 localhost dnsmasq[311147]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:04:40 localhost dnsmasq[311147]: warning: no upstream servers configured Dec 2 05:04:40 localhost dnsmasq-dhcp[311147]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:04:40 localhost dnsmasq[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/addn_hosts - 0 addresses Dec 2 05:04:40 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/host Dec 2 05:04:40 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/opts Dec 2 05:04:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:40.332 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:40 localhost systemd[1]: tmp-crun.ScqZBq.mount: Deactivated successfully. 
Dec 2 05:04:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:40.479 263406 INFO neutron.agent.dhcp.agent [None req-d175de16-5bdd-4b06-bb3b-a8e8d0ce6b90 - - - - - -] DHCP configuration for ports {'0999b431-c362-4180-a7a9-8664fe007369'} is completed#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.746 281858 DEBUG nova.network.neutron [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache with network_info: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.766 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 
dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Releasing lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.768 281858 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='82e23ec3-1d57-4166-9ba0-839ded943a78',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.769 281858 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Creating instance directory: /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.770 281858 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 
dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Ensure instance console log exists: /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.771 281858 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.772 281858 DEBUG nova.virt.libvirt.vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T10:04:33Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-02T10:04:33Z,user_data=None,user_id='ec20a6cceee246d6b468
78df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.773 281858 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.774 281858 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.775 281858 DEBUG os_vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 
10:04:40.776 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.776 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.777 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.781 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.782 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54433c73-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.783 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54433c73-7e, col_values=(('external_ids', {'iface-id': '54433c73-7e5c-481c-b64c-19e9cfd6e56f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:b6:1c', 'vm-uuid': '82e23ec3-1d57-4166-9ba0-839ded943a78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.785 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.788 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.790 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.792 281858 INFO os_vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e')#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.793 281858 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. 
pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Dec 2 05:04:40 localhost nova_compute[281854]: 2025-12-02 10:04:40.793 281858 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='82e23ec3-1d57-4166-9ba0-839ded943a78',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Dec 2 05:04:40 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:41 localhost nova_compute[281854]: 2025-12-02 10:04:41.766 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:04:42 localhost podman[311150]: 2025-12-02 10:04:42.453056372 +0000 UTC m=+0.087300556 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:04:42 localhost podman[311150]: 2025-12-02 10:04:42.468101005 +0000 UTC m=+0.102345209 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Dec 2 05:04:42 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:04:43 localhost nova_compute[281854]: 2025-12-02 10:04:43.432 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:44 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:44.397 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:43Z, description=, device_id=279e244d-14ba-4911-a425-d38d92768269, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=55fb1997-25fe-4011-9820-773c0aa66e3d, ip_allocation=immediate, mac_address=fa:16:3e:b4:9c:02, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:04:35Z, description=, dns_domain=, id=45d02cf1-f511-4416-b7c1-b37c417f16f9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-1627103925-network, port_security_enabled=True, project_id=50df25ee29424615807a458690cdf8d7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33331, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=681, status=ACTIVE, subnets=['34aa8025-e49d-4c09-aefd-41c4d8900224'], tags=[], tenant_id=50df25ee29424615807a458690cdf8d7, updated_at=2025-12-02T10:04:36Z, vlan_transparent=None, network_id=45d02cf1-f511-4416-b7c1-b37c417f16f9, port_security_enabled=False, project_id=50df25ee29424615807a458690cdf8d7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=710, status=DOWN, tags=[], tenant_id=50df25ee29424615807a458690cdf8d7, updated_at=2025-12-02T10:04:43Z on network 
45d02cf1-f511-4416-b7c1-b37c417f16f9#033[00m Dec 2 05:04:44 localhost nova_compute[281854]: 2025-12-02 10:04:44.783 281858 DEBUG nova.network.neutron [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f updated with migration profile {'migrating_to': 'np0005541913.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Dec 2 05:04:44 localhost nova_compute[281854]: 2025-12-02 10:04:44.785 281858 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='82e23ec3-1d57-4166-9ba0-839ded943a78',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Dec 2 05:04:44 localhost dnsmasq[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/addn_hosts - 1 addresses Dec 2 05:04:44 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/host Dec 2 05:04:44 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/opts Dec 2 
05:04:44 localhost podman[311187]: 2025-12-02 10:04:44.89982017 +0000 UTC m=+0.063583183 container kill 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:04:45 localhost sshd[311208]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:04:45 localhost nova_compute[281854]: 2025-12-02 10:04:45.202 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:45 localhost systemd[1]: Created slice User Slice of UID 42436. Dec 2 05:04:45 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Dec 2 05:04:45 localhost systemd-logind[757]: New session 73 of user nova. Dec 2 05:04:45 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Dec 2 05:04:45 localhost systemd[1]: Starting User Manager for UID 42436... Dec 2 05:04:45 localhost systemd[311212]: Queued start job for default target Main User Target. Dec 2 05:04:45 localhost systemd[311212]: Created slice User Application Slice. Dec 2 05:04:45 localhost systemd[311212]: Started Mark boot as successful after the user session has run 2 minutes. Dec 2 05:04:45 localhost systemd[311212]: Started Daily Cleanup of User's Temporary Directories. Dec 2 05:04:45 localhost systemd[311212]: Reached target Paths. Dec 2 05:04:45 localhost systemd[311212]: Reached target Timers. Dec 2 05:04:45 localhost systemd[311212]: Starting D-Bus User Message Bus Socket... 
Dec 2 05:04:45 localhost systemd[311212]: Starting Create User's Volatile Files and Directories... Dec 2 05:04:45 localhost systemd[311212]: Listening on D-Bus User Message Bus Socket. Dec 2 05:04:45 localhost systemd[311212]: Reached target Sockets. Dec 2 05:04:45 localhost systemd[311212]: Finished Create User's Volatile Files and Directories. Dec 2 05:04:45 localhost systemd[311212]: Reached target Basic System. Dec 2 05:04:45 localhost systemd[311212]: Reached target Main User Target. Dec 2 05:04:45 localhost systemd[311212]: Startup finished in 170ms. Dec 2 05:04:45 localhost systemd[1]: Started User Manager for UID 42436. Dec 2 05:04:45 localhost systemd[1]: Started Session 73 of User nova. Dec 2 05:04:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:04:45 localhost podman[311228]: 2025-12-02 10:04:45.645648442 +0000 UTC m=+0.071275078 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:04:45 localhost podman[311228]: 2025-12-02 10:04:45.682029685 +0000 UTC m=+0.107656311 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 2 05:04:45 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:04:45 localhost kernel: device tap54433c73-7e entered promiscuous mode Dec 2 05:04:45 localhost NetworkManager[5965]: [1764669885.7233] manager: (tap54433c73-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/27) Dec 2 05:04:45 localhost nova_compute[281854]: 2025-12-02 10:04:45.723 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:45 localhost systemd-udevd[311259]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:04:45 localhost ovn_controller[154505]: 2025-12-02T10:04:45Z|00137|binding|INFO|Claiming lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f for this additional chassis. Dec 2 05:04:45 localhost ovn_controller[154505]: 2025-12-02T10:04:45Z|00138|binding|INFO|54433c73-7e5c-481c-b64c-19e9cfd6e56f: Claiming fa:16:3e:bb:b6:1c 10.100.0.13 Dec 2 05:04:45 localhost ovn_controller[154505]: 2025-12-02T10:04:45Z|00139|binding|INFO|Claiming lport ffcaba02-6808-4409-8458-941ca0af2e66 for this additional chassis. 
Dec 2 05:04:45 localhost ovn_controller[154505]: 2025-12-02T10:04:45Z|00140|binding|INFO|ffcaba02-6808-4409-8458-941ca0af2e66: Claiming fa:16:3e:a7:75:fd 19.80.0.43 Dec 2 05:04:45 localhost nova_compute[281854]: 2025-12-02 10:04:45.729 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:45 localhost nova_compute[281854]: 2025-12-02 10:04:45.731 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:45 localhost NetworkManager[5965]: [1764669885.7428] device (tap54433c73-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 2 05:04:45 localhost NetworkManager[5965]: [1764669885.7438] device (tap54433c73-7e): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 2 05:04:45 localhost ovn_controller[154505]: 2025-12-02T10:04:45Z|00141|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f ovn-installed in OVS Dec 2 05:04:45 localhost nova_compute[281854]: 2025-12-02 10:04:45.749 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:45 localhost systemd-machined[84262]: New machine qemu-5-instance-00000008. Dec 2 05:04:45 localhost systemd[1]: Started Virtual Machine qemu-5-instance-00000008. 
Dec 2 05:04:45 localhost nova_compute[281854]: 2025-12-02 10:04:45.784 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:45 localhost nova_compute[281854]: 2025-12-02 10:04:45.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:46 localhost nova_compute[281854]: 2025-12-02 10:04:46.045 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:46 localhost nova_compute[281854]: 2025-12-02 10:04:46.046 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Started (Lifecycle Event)#033[00m Dec 2 05:04:46 localhost nova_compute[281854]: 2025-12-02 10:04:46.064 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:46.206 263406 INFO neutron.agent.dhcp.agent [None req-1c008398-2d3a-4e17-910d-4f3fbd976cf7 - - - - - -] DHCP configuration for ports {'55fb1997-25fe-4011-9820-773c0aa66e3d'} is completed#033[00m Dec 2 05:04:46 localhost nova_compute[281854]: 2025-12-02 10:04:46.858 281858 DEBUG nova.virt.driver [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] Emitting event Resumed> 
emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:04:46 localhost nova_compute[281854]: 2025-12-02 10:04:46.859 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Resumed (Lifecycle Event)#033[00m Dec 2 05:04:46 localhost nova_compute[281854]: 2025-12-02 10:04:46.880 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:46 localhost nova_compute[281854]: 2025-12-02 10:04:46.886 281858 DEBUG nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 2 05:04:46 localhost nova_compute[281854]: 2025-12-02 10:04:46.907 281858 INFO nova.compute.manager [None req-a2e5fb31-f32c-40ed-828f-a970ba3add1b - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] During the sync_power process the instance has moved from host np0005541914.localdomain to host np0005541913.localdomain#033[00m Dec 2 05:04:47 localhost systemd[1]: session-73.scope: Deactivated successfully. Dec 2 05:04:47 localhost systemd-logind[757]: Session 73 logged out. Waiting for processes to exit. Dec 2 05:04:47 localhost systemd-logind[757]: Removed session 73. 
Dec 2 05:04:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:47.851 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:46Z, description=, device_id=11e16c5e-46e1-4a00-8cde-eb7c634beb6e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f642efd7-a23a-4ea5-ac71-0a9b43d62652, ip_allocation=immediate, mac_address=fa:16:3e:01:87:24, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:04:34Z, description=, dns_domain=, id=26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-1774083162-network, port_security_enabled=True, project_id=e9e3da8770844ad5b5552298a24dcbd2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50867, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=673, status=ACTIVE, subnets=['1fd9a2bb-1a18-4b88-9f27-6b97d2310288'], tags=[], tenant_id=e9e3da8770844ad5b5552298a24dcbd2, updated_at=2025-12-02T10:04:35Z, vlan_transparent=None, network_id=26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, port_security_enabled=False, project_id=e9e3da8770844ad5b5552298a24dcbd2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=721, status=DOWN, tags=[], tenant_id=e9e3da8770844ad5b5552298a24dcbd2, updated_at=2025-12-02T10:04:47Z on network 26a036bb-7fc2-42d0-b324-4cf6bb77a9d6#033[00m Dec 2 05:04:48 localhost podman[311332]: 2025-12-02 10:04:48.103431925 +0000 UTC m=+0.081995115 container kill 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:04:48 localhost dnsmasq[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/addn_hosts - 1 addresses Dec 2 05:04:48 localhost dnsmasq-dhcp[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/host Dec 2 05:04:48 localhost dnsmasq-dhcp[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/opts Dec 2 05:04:48 localhost systemd[1]: tmp-crun.DHAJxW.mount: Deactivated successfully. Dec 2 05:04:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:04:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:04:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:48.352 263406 INFO neutron.agent.dhcp.agent [None req-64949150-9071-4bf4-90a9-dfc2d0ee4fb9 - - - - - -] DHCP configuration for ports {'f642efd7-a23a-4ea5-ac71-0a9b43d62652'} is completed#033[00m Dec 2 05:04:48 localhost podman[311354]: 2025-12-02 10:04:48.453714846 +0000 UTC m=+0.092671260 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:04:48 localhost ovn_controller[154505]: 2025-12-02T10:04:48Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:b6:1c 10.100.0.13 Dec 2 05:04:48 localhost ovn_controller[154505]: 
2025-12-02T10:04:48Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:b6:1c 10.100.0.13 Dec 2 05:04:48 localhost podman[311353]: 2025-12-02 10:04:48.499293226 +0000 UTC m=+0.137858380 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350) Dec 2 05:04:48 localhost podman[311353]: 2025-12-02 10:04:48.508539103 +0000 UTC m=+0.147104267 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, name=ubi9-minimal, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.) 
Dec 2 05:04:48 localhost podman[311354]: 2025-12-02 10:04:48.518750056 +0000 UTC m=+0.157706450 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:04:48 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:04:48 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:04:48 localhost snmpd[69635]: empty variable list in _query Dec 2 05:04:49 localhost ovn_controller[154505]: 2025-12-02T10:04:49Z|00142|binding|INFO|Claiming lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f for this chassis. Dec 2 05:04:49 localhost ovn_controller[154505]: 2025-12-02T10:04:49Z|00143|binding|INFO|54433c73-7e5c-481c-b64c-19e9cfd6e56f: Claiming fa:16:3e:bb:b6:1c 10.100.0.13 Dec 2 05:04:49 localhost ovn_controller[154505]: 2025-12-02T10:04:49Z|00144|binding|INFO|Claiming lport ffcaba02-6808-4409-8458-941ca0af2e66 for this chassis. Dec 2 05:04:49 localhost ovn_controller[154505]: 2025-12-02T10:04:49Z|00145|binding|INFO|ffcaba02-6808-4409-8458-941ca0af2e66: Claiming fa:16:3e:a7:75:fd 19.80.0.43 Dec 2 05:04:49 localhost ovn_controller[154505]: 2025-12-02T10:04:49Z|00146|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f up in Southbound Dec 2 05:04:49 localhost ovn_controller[154505]: 2025-12-02T10:04:49Z|00147|binding|INFO|Setting lport ffcaba02-6808-4409-8458-941ca0af2e66 up in Southbound Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.739 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 
'neutron:revision_number': '3', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.742 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=4, 
gateway_chassis=[], requested_chassis=[], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.743 160221 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e bound to our chassis#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.747 160221 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c40d86e4-7101-443b-abce-328f7d1ea40e#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.756 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9e4f61-5953-496e-b2bc-69d651fffddb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.758 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc40d86e4-71 in ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.760 160340 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc40d86e4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.761 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[47f4e6d7-64ee-446f-a921-e9c9f65c3cd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.762 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe7276a-0194-4453-871e-e584bf8eb253]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.773 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[18fe22a5-dc6a-4a16-b7cc-e80af19343b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:49.779 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:43Z, description=, device_id=279e244d-14ba-4911-a425-d38d92768269, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=55fb1997-25fe-4011-9820-773c0aa66e3d, ip_allocation=immediate, mac_address=fa:16:3e:b4:9c:02, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:04:35Z, description=, dns_domain=, id=45d02cf1-f511-4416-b7c1-b37c417f16f9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-1627103925-network, port_security_enabled=True, project_id=50df25ee29424615807a458690cdf8d7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33331, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=681, status=ACTIVE, subnets=['34aa8025-e49d-4c09-aefd-41c4d8900224'], tags=[], tenant_id=50df25ee29424615807a458690cdf8d7, updated_at=2025-12-02T10:04:36Z, vlan_transparent=None, network_id=45d02cf1-f511-4416-b7c1-b37c417f16f9, port_security_enabled=False, project_id=50df25ee29424615807a458690cdf8d7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=710, status=DOWN, tags=[], 
tenant_id=50df25ee29424615807a458690cdf8d7, updated_at=2025-12-02T10:04:43Z on network 45d02cf1-f511-4416-b7c1-b37c417f16f9#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.785 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[aacd252b-b49d-430f-bc18-909617f5e161]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.810 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3a1fc7-3960-4e7f-ae2a-358d69c45824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost NetworkManager[5965]: [1764669889.8182] manager: (tapc40d86e4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/28) Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.818 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[1379c99e-b2d5-4fa9-aaaf-09d94553df35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost systemd-udevd[311405]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.845 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[643cb8a5-57a3-4961-8adc-17810e9a2df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.850 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[9adabb22-1535-4bd0-8756-329e8106fe19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc40d86e4-71: link becomes ready Dec 2 05:04:49 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc40d86e4-70: link becomes ready Dec 2 05:04:49 localhost NetworkManager[5965]: [1764669889.8707] device (tapc40d86e4-70): carrier: link connected Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.874 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8c22a0-3fa5-4691-bcfd-53daef2b0eb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.891 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[e888d90a-9785-42a2-a7e0-6f686e04c689]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc40d86e4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 
'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:45:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1205201, 'reachable_time': 
40862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311439, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.904 160340 DEBUG oslo.privsep.daemon [-] 
privsep: reply[b1386d08-322d-4282-83c8-7dee191f20b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:457f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1205201, 'tstamp': 1205201}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311441, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.918 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[53b4a524-30f7-4562-a7cc-6c19094f1fd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc40d86e4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:45:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 
'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1205201, 'reachable_time': 40862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 
'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311442, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.938 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6daf87-6396-46b0-9b00-59d3270a791f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost dnsmasq[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/addn_hosts - 1 addresses Dec 2 05:04:49 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/host Dec 2 05:04:49 localhost podman[311444]: 2025-12-02 10:04:49.971781309 +0000 UTC 
m=+0.049037613 container kill 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:04:49 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/opts Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.986 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[c8879fbe-0d6d-4de5-b797-36c69f762859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.988 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc40d86e4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.988 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.989 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc40d86e4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:49 localhost nova_compute[281854]: 2025-12-02 10:04:49.990 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:49 localhost kernel: device tapc40d86e4-70 entered promiscuous mode Dec 2 05:04:49 localhost nova_compute[281854]: 2025-12-02 10:04:49.994 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:49.998 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc40d86e4-70, col_values=(('external_ids', {'iface-id': '60398627-924e-4353-b9ee-b86c24b6fc87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:49 localhost nova_compute[281854]: 2025-12-02 10:04:49.999 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost ovn_controller[154505]: 2025-12-02T10:04:50Z|00148|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0) Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.000 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.003 160221 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.004 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1bb26c-8dff-4254-8208-b199dbf59c14]: (4, 
None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.005 160221 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: global Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: log /dev/log local0 debug Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: log-tag haproxy-metadata-proxy-c40d86e4-7101-443b-abce-328f7d1ea40e Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: user root Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: group root Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: maxconn 1024 Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: pidfile /var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: daemon Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: defaults Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: log global Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: mode http Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: option httplog Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: option dontlognull Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: option http-server-close Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: option forwardfor Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: retries 3 Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout http-request 30s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout connect 30s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout client 32s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout server 32s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout http-keep-alive 30s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: Dec 2 05:04:50 localhost 
ovn_metadata_agent[160216]: listen listener Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: bind 169.254.169.254:80 Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: http-request add-header X-OVN-Network-ID c40d86e4-7101-443b-abce-328f7d1ea40e Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.005 160221 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'env', 'PROCESS_TAG=haproxy-c40d86e4-7101-443b-abce-328f7d1ea40e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c40d86e4-7101-443b-abce-328f7d1ea40e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.007 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.241 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:50.248 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-cf58f353-04b9-463a-832f-2ee6517a222b req-e9021766-f952-4fcb-9d58-29ffe2b82e7c 4ea94a3d730c499a8a661131692645ce 497073c2347a4b2dbbf501873318fbd3 - - default default] This port is not SRIOV, skip binding for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f.#033[00m Dec 2 05:04:50 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:50.256 263406 INFO neutron.agent.dhcp.agent [None 
req-501d446f-4e06-4469-8098-de6632e7f437 - - - - - -] DHCP configuration for ports {'55fb1997-25fe-4011-9820-773c0aa66e3d'} is completed#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.407 281858 INFO nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Post operation of migration started#033[00m Dec 2 05:04:50 localhost podman[311498]: Dec 2 05:04:50 localhost podman[311498]: 2025-12-02 10:04:50.425845626 +0000 UTC m=+0.067901967 container create e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:04:50 localhost systemd[1]: Started libpod-conmon-e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980.scope. Dec 2 05:04:50 localhost systemd[1]: tmp-crun.1dhSF6.mount: Deactivated successfully. Dec 2 05:04:50 localhost podman[311498]: 2025-12-02 10:04:50.391320003 +0000 UTC m=+0.033376364 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:04:50 localhost systemd[1]: Started libcrun container. 
Dec 2 05:04:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f06f0939af8ced9e01822fd15f35fbfde05ec9e41ca9e0ac345284976c2f364/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:04:50 localhost podman[311498]: 2025-12-02 10:04:50.512996307 +0000 UTC m=+0.155052698 container init e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:04:50 localhost podman[311498]: 2025-12-02 10:04:50.52092434 +0000 UTC m=+0.162980701 container start e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:04:50 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[311512]: [NOTICE] (311516) : New worker (311518) forked Dec 2 05:04:50 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[311512]: [NOTICE] (311516) : Loading success. 
Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.579 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.581 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 53bdfc6a-79b0-43cf-92a6-99b85b988b28 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.582 160221 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13bbad22-ab61-4b1f-849e-c651aa8f3297#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.590 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[bdefa7a7-ba4b-461b-89f1-ef48d5c15d77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.591 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap13bbad22-a1 in ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.593 160340 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap13bbad22-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.593 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[206ad9a1-00d6-416e-b8ab-6bef385097a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.595 160340 DEBUG 
oslo.privsep.daemon [-] privsep: reply[50e685fa-cc8b-472a-ab7e-7da0f135fcea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.603 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[fb245f14-7b26-40e1-bc84-2186b79e805e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.615 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a787f23b-23d8-4e60-bf12-b07c5b325020]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.641 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[b12331eb-8db0-443a-90b2-a3ce42979dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.646 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bee95c-ba45-4d4f-b4b7-88248dba446c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost NetworkManager[5965]: [1764669890.6475] manager: (tap13bbad22-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/29) Dec 2 05:04:50 localhost systemd-udevd[311411]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.675 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[b99f6f6a-dbc1-45bc-b22f-cefe027be62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.679 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[99ea0efb-c159-4795-a874-0275d8b5c257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap13bbad22-a0: link becomes ready Dec 2 05:04:50 localhost NetworkManager[5965]: [1764669890.6980] device (tap13bbad22-a0): carrier: link connected Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.701 160351 DEBUG oslo.privsep.daemon [-] privsep: reply[379ab8be-4174-4829-928a-a110024008f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.719 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[be6b7b7a-3c0f-4378-8c9f-99fcc34d8496]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13bbad22-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:43:17'], ['IFLA_BROADCAST', 
'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1205284, 'reachable_time': 36060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 
1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311537, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.724 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.737 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[378586dd-80ac-48be-8f0a-47abbaa51adc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:4317'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1205284, 'tstamp': 1205284}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311538, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.746 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.746 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquired lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.747 281858 DEBUG nova.network.neutron [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 2 
05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.762 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[403893e8-df9a-42fc-a410-11990bb683bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13bbad22-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:43:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1205284, 'reachable_time': 36060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 
0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311539, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.786 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.800 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[f99faaf0-b668-46bb-80b3-371c92cdd1cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.858 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5a94fc09-c4bb-4c77-922d-1b1fccdff6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.860 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bbad22-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.860 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] 
Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.861 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13bbad22-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.864 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost kernel: device tap13bbad22-a0 entered promiscuous mode Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.868 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13bbad22-a0, col_values=(('external_ids', {'iface-id': '202be55f-4a2f-4e8a-884e-d4a72a4d525d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:50 localhost ovn_controller[154505]: 2025-12-02T10:04:50Z|00149|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0) Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.870 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:50 localhost nova_compute[281854]: 2025-12-02 10:04:50.881 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 
2025-12-02 10:04:50.882 160221 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.883 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[263550af-2e8c-4221-aa56-10e1bd135aeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.884 160221 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: global Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: log /dev/log local0 debug Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: log-tag haproxy-metadata-proxy-13bbad22-ab61-4b1f-849e-c651aa8f3297 Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: user root Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: group root Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: maxconn 1024 Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: pidfile /var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: daemon Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: defaults Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: log global Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: mode http Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: option httplog Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: option dontlognull Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: option http-server-close Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: option forwardfor Dec 2 05:04:50 localhost 
ovn_metadata_agent[160216]: retries 3 Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout http-request 30s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout connect 30s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout client 32s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout server 32s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: timeout http-keep-alive 30s Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: listen listener Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: bind 169.254.169.254:80 Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: server metadata /var/lib/neutron/metadata_proxy Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: http-request add-header X-OVN-Network-ID 13bbad22-ab61-4b1f-849e-c651aa8f3297 Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 2 05:04:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:50.884 160221 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'env', 'PROCESS_TAG=haproxy-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/13bbad22-ab61-4b1f-849e-c651aa8f3297.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 2 05:04:51 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:51.291 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:46Z, description=, device_id=11e16c5e-46e1-4a00-8cde-eb7c634beb6e, 
device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f642efd7-a23a-4ea5-ac71-0a9b43d62652, ip_allocation=immediate, mac_address=fa:16:3e:01:87:24, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:04:34Z, description=, dns_domain=, id=26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-1774083162-network, port_security_enabled=True, project_id=e9e3da8770844ad5b5552298a24dcbd2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50867, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=673, status=ACTIVE, subnets=['1fd9a2bb-1a18-4b88-9f27-6b97d2310288'], tags=[], tenant_id=e9e3da8770844ad5b5552298a24dcbd2, updated_at=2025-12-02T10:04:35Z, vlan_transparent=None, network_id=26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, port_security_enabled=False, project_id=e9e3da8770844ad5b5552298a24dcbd2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=721, status=DOWN, tags=[], tenant_id=e9e3da8770844ad5b5552298a24dcbd2, updated_at=2025-12-02T10:04:47Z on network 26a036bb-7fc2-42d0-b324-4cf6bb77a9d6#033[00m Dec 2 05:04:51 localhost podman[311571]: Dec 2 05:04:51 localhost podman[311571]: 2025-12-02 10:04:51.395174648 +0000 UTC m=+0.077381960 container create aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:51 localhost systemd[1]: Started libpod-conmon-aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5.scope. Dec 2 05:04:51 localhost systemd[1]: Started libcrun container. Dec 2 05:04:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717f1ef704ec9c5b6d7c4f85d43274eb21a1cf80205ee2cbc3615610a19b18d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:04:51 localhost podman[311571]: 2025-12-02 10:04:51.459271863 +0000 UTC m=+0.141479175 container init aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 2 05:04:51 localhost podman[311571]: 2025-12-02 10:04:51.361931479 +0000 UTC m=+0.044138811 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 2 05:04:51 localhost systemd[1]: tmp-crun.hz32uu.mount: Deactivated successfully. 
Dec 2 05:04:51 localhost podman[311571]: 2025-12-02 10:04:51.476291878 +0000 UTC m=+0.158499190 container start aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:51 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[311600]: [NOTICE] (311611) : New worker (311616) forked Dec 2 05:04:51 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[311600]: [NOTICE] (311611) : Loading success. Dec 2 05:04:51 localhost dnsmasq[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/addn_hosts - 1 addresses Dec 2 05:04:51 localhost dnsmasq-dhcp[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/host Dec 2 05:04:51 localhost podman[311604]: 2025-12-02 10:04:51.533315075 +0000 UTC m=+0.062577226 container kill 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:04:51 localhost dnsmasq-dhcp[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/opts Dec 2 05:04:51 localhost 
neutron_dhcp_agent[263402]: 2025-12-02 10:04:51.704 263406 INFO neutron.agent.dhcp.agent [None req-26581c33-f744-4381-ace6-87b946c7b089 - - - - - -] DHCP configuration for ports {'f642efd7-a23a-4ea5-ac71-0a9b43d62652'} is completed#033[00m Dec 2 05:04:51 localhost nova_compute[281854]: 2025-12-02 10:04:51.778 281858 DEBUG nova.network.neutron [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache with network_info: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:51 localhost nova_compute[281854]: 2025-12-02 10:04:51.803 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 
dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Releasing lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:04:51 localhost nova_compute[281854]: 2025-12-02 10:04:51.820 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:51 localhost nova_compute[281854]: 2025-12-02 10:04:51.821 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:51 localhost nova_compute[281854]: 2025-12-02 10:04:51.821 281858 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:51 localhost nova_compute[281854]: 2025-12-02 10:04:51.828 281858 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Sending announce-self command to QEMU monitor. 
Attempt 1 of 3#033[00m Dec 2 05:04:51 localhost journal[203664]: Domain id=5 name='instance-00000008' uuid=82e23ec3-1d57-4166-9ba0-839ded943a78 is tainted: custom-monitor Dec 2 05:04:51 localhost ovn_controller[154505]: 2025-12-02T10:04:51Z|00150|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:04:51 localhost ovn_controller[154505]: 2025-12-02T10:04:51Z|00151|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0) Dec 2 05:04:51 localhost ovn_controller[154505]: 2025-12-02T10:04:51Z|00152|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0) Dec 2 05:04:51 localhost nova_compute[281854]: 2025-12-02 10:04:51.973 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:52 localhost nova_compute[281854]: 2025-12-02 10:04:52.838 281858 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Sending announce-self command to QEMU monitor. 
Attempt 2 of 3#033[00m Dec 2 05:04:53 localhost ovn_controller[154505]: 2025-12-02T10:04:53Z|00153|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:04:53 localhost ovn_controller[154505]: 2025-12-02T10:04:53Z|00154|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0) Dec 2 05:04:53 localhost ovn_controller[154505]: 2025-12-02T10:04:53Z|00155|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0) Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.581 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.846 281858 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Sending announce-self 
command to QEMU monitor. Attempt 3 of 3#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.858 281858 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.880 281858 DEBUG nova.objects.instance [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.925 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.925 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.926 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:04:53 localhost nova_compute[281854]: 2025-12-02 10:04:53.926 281858 DEBUG 
nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.495 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.531 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.531 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.532 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.532 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.533 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.554 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.555 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.555 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.555 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:04:54 localhost nova_compute[281854]: 2025-12-02 10:04:54.556 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:54 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:54.636 2 INFO neutron.agent.securitygroups_rpc [None req-5252ab83-90b7-4c17-ab41-150a0f430946 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group rule updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']#033[00m Dec 2 05:04:54 localhost neutron_sriov_agent[256494]: 2025-12-02 10:04:54.923 2 INFO neutron.agent.securitygroups_rpc [None req-a8a8282d-6793-4a84-80fc-24e3966f9a17 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group rule updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']#033[00m Dec 2 05:04:54 localhost ceph-mon[298296]: 
mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:54 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4293966053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.005 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.077 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.078 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.083 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.083 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.245 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.348 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.350 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11070MB free_disk=41.70097732543945GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": 
"7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.350 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.351 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.398 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Applying migration context for instance 82e23ec3-1d57-4166-9ba0-839ded943a78 as it has an incoming, in-progress migration f83e1b81-4647-4642-b7c4-b4f369bef051. 
Migration status is running _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.398 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.415 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.442 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.443 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance 82e23ec3-1d57-4166-9ba0-839ded943a78 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.443 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.444 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1152MB phys_disk=41GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.504 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.600 281858 DEBUG nova.compute.manager [req-262747ee-d656-4f22-bc15-e5465af16acf req-c58fd4d7-a041-4bfd-82f2-9e88a31911ab dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.601 281858 DEBUG oslo_concurrency.lockutils [req-262747ee-d656-4f22-bc15-e5465af16acf req-c58fd4d7-a041-4bfd-82f2-9e88a31911ab dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default 
default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.601 281858 DEBUG oslo_concurrency.lockutils [req-262747ee-d656-4f22-bc15-e5465af16acf req-c58fd4d7-a041-4bfd-82f2-9e88a31911ab dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.601 281858 DEBUG oslo_concurrency.lockutils [req-262747ee-d656-4f22-bc15-e5465af16acf req-c58fd4d7-a041-4bfd-82f2-9e88a31911ab dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.602 281858 DEBUG nova.compute.manager [req-262747ee-d656-4f22-bc15-e5465af16acf req-c58fd4d7-a041-4bfd-82f2-9e88a31911ab dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.602 281858 WARNING nova.compute.manager [req-262747ee-d656-4f22-bc15-e5465af16acf req-c58fd4d7-a041-4bfd-82f2-9e88a31911ab dafd7fe1ebe54740b64cc9f8b3667fc9 
497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state deleting.#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.626 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.626 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.627 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.627 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by 
"nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.628 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.629 281858 INFO nova.compute.manager [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Terminating instance#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.630 281858 DEBUG nova.compute.manager [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start destroying the instance on the hypervisor. 
_shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 2 05:04:55 localhost podman[311679]: 2025-12-02 10:04:55.635881649 +0000 UTC m=+0.056728958 container kill 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:55 localhost dnsmasq[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/addn_hosts - 0 addresses Dec 2 05:04:55 localhost dnsmasq-dhcp[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/host Dec 2 05:04:55 localhost dnsmasq-dhcp[311070]: read /var/lib/neutron/dhcp/26a036bb-7fc2-42d0-b324-4cf6bb77a9d6/opts Dec 2 05:04:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:04:55 localhost kernel: device tap54433c73-7e left promiscuous mode Dec 2 05:04:55 localhost NetworkManager[5965]: [1764669895.6961] device (tap54433c73-7e): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00156|binding|INFO|Releasing lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f from this chassis (sb_readonly=0) Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.706 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00157|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f down in Southbound Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00158|binding|INFO|Releasing lport ffcaba02-6808-4409-8458-941ca0af2e66 from this chassis (sb_readonly=0) Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00159|binding|INFO|Setting lport ffcaba02-6808-4409-8458-941ca0af2e66 down in Southbound Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00160|binding|INFO|Removing iface tap54433c73-7e ovn-installed in OVS Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.709 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00161|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00162|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0) Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00163|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis 
(sb_readonly=0) Dec 2 05:04:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:55.717 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:55.719 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:55.720 160221 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis#033[00m Dec 2 05:04:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:55.728 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:55 localhost systemd[1]: tmp-crun.NToL0U.mount: Deactivated successfully. 
Dec 2 05:04:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:55.732 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4e1e00-745e-4ac8-8ab6-33f99187eb29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:55.735 160221 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e namespace which is not needed anymore#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.734 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Deactivated successfully. Dec 2 05:04:55 localhost systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000008.scope: Consumed 1.810s CPU time. Dec 2 05:04:55 localhost systemd-machined[84262]: Machine qemu-5-instance-00000008 terminated. 
Dec 2 05:04:55 localhost podman[311712]: 2025-12-02 10:04:55.745238355 +0000 UTC m=+0.083296390 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.747 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost podman[311712]: 2025-12-02 10:04:55.755024127 +0000 UTC m=+0.093082212 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:04:55 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated 
successfully. Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.787 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.802 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00164|binding|INFO|Releasing lport c1f0bd46-6bae-4902-9292-e19c6e88557a from this chassis (sb_readonly=0) Dec 2 05:04:55 localhost kernel: device tapc1f0bd46-6b left promiscuous mode Dec 2 05:04:55 localhost ovn_controller[154505]: 2025-12-02T10:04:55Z|00165|binding|INFO|Setting lport c1f0bd46-6bae-4902-9292-e19c6e88557a down in Southbound Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.817 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:55.821 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e9e3da8770844ad5b5552298a24dcbd2', 'neutron:revision_number': '3', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46d349b8-3ce0-4286-826a-479b1dd2a429, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c1f0bd46-6bae-4902-9292-e19c6e88557a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.874 281858 INFO nova.virt.libvirt.driver [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance destroyed successfully.#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.874 281858 DEBUG nova.objects.instance [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lazy-loading 'resources' on Instance uuid 82e23ec3-1d57-4166-9ba0-839ded943a78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:04:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:04:55 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[311512]: [NOTICE] (311516) : haproxy version is 2.8.14-c23fe91 Dec 2 05:04:55 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[311512]: [NOTICE] (311516) : path to executable is /usr/sbin/haproxy Dec 2 05:04:55 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[311512]: [WARNING] (311516) : Exiting Master process... 
Dec 2 05:04:55 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[311512]: [ALERT] (311516) : Current worker (311518) exited with code 143 (Terminated) Dec 2 05:04:55 localhost neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[311512]: [WARNING] (311516) : All workers exited. Exiting... (0) Dec 2 05:04:55 localhost systemd[1]: libpod-e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980.scope: Deactivated successfully. Dec 2 05:04:55 localhost podman[311765]: 2025-12-02 10:04:55.906417127 +0000 UTC m=+0.070364263 container died e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:04:55 localhost podman[311765]: 2025-12-02 10:04:55.926375461 +0000 UTC m=+0.090322597 container cleanup e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.931 281858 DEBUG nova.virt.libvirt.vif [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 
d048f19ff5fc47dc88162ef5f9cebe8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005541913.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-02T10:04:33Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=
2025-12-02T10:04:53Z,user_data=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.931 281858 DEBUG nova.network.os_vif_util [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.932 281858 DEBUG nova.network.os_vif_util [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.932 281858 DEBUG os_vif [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') unplug 
/usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.933 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.934 281858 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54433c73-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.936 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.938 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.939 281858 INFO os_vif [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e')#033[00m Dec 2 05:04:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:04:55 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1384798435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:04:55 localhost podman[311788]: 2025-12-02 10:04:55.96595286 +0000 UTC m=+0.055225669 container cleanup e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:04:55 localhost systemd[1]: libpod-conmon-e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980.scope: Deactivated successfully. Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.973 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.977 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:04:55 localhost nova_compute[281854]: 2025-12-02 10:04:55.991 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 
'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:04:56 localhost podman[311802]: 2025-12-02 10:04:56.000019281 +0000 UTC m=+0.062624406 container remove e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.003 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6601959a-9d80-49a9-a55f-c80b61b193e8]: (4, ('Tue Dec 2 10:04:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e (e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980)\ne8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980\nTue Dec 2 10:04:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e (e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980)\ne8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.004 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[49c553bb-6f88-4710-a4d8-a86a772afce8]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.005 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc40d86e4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:56 localhost kernel: device tapc40d86e4-70 left promiscuous mode Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.009 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.013 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.013 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.021 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.023 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[b832e38e-071d-44c2-afbc-8c63fdf4d517]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.048 160340 DEBUG 
oslo.privsep.daemon [-] privsep: reply[7939a2ba-4c44-4634-905c-2ded153dcd20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.050 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[c9270f5b-e9b4-448e-a2d7-09432c8963e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.061 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[09a83e81-79a1-479c-b01f-19401fb16a83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 
'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1205195, 'reachable_time': 25638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], 
['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311837, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.062 160371 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.063 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[a6df44d3-e753-479f-9683-698c24e339e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.064 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.068 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 53bdfc6a-79b0-43cf-92a6-99b85b988b28 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.068 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13bbad22-ab61-4b1f-849e-c651aa8f3297, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.069 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b01095-d35d-4246-981d-c5eafeaca956]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.069 160221 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 namespace which is not needed anymore#033[00m Dec 2 05:04:56 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[311600]: [NOTICE] (311611) : haproxy version is 2.8.14-c23fe91 Dec 2 05:04:56 localhost 
neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[311600]: [NOTICE] (311611) : path to executable is /usr/sbin/haproxy Dec 2 05:04:56 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[311600]: [WARNING] (311611) : Exiting Master process... Dec 2 05:04:56 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[311600]: [ALERT] (311611) : Current worker (311616) exited with code 143 (Terminated) Dec 2 05:04:56 localhost neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[311600]: [WARNING] (311611) : All workers exited. Exiting... (0) Dec 2 05:04:56 localhost systemd[1]: libpod-aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5.scope: Deactivated successfully. Dec 2 05:04:56 localhost podman[311855]: 2025-12-02 10:04:56.285013886 +0000 UTC m=+0.067418235 container died aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:04:56 localhost podman[311855]: 2025-12-02 10:04:56.326933407 +0000 UTC m=+0.109337776 container cleanup aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:04:56 localhost podman[311869]: 2025-12-02 10:04:56.344341233 +0000 UTC m=+0.051880250 container cleanup aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:56 localhost systemd[1]: libpod-conmon-aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5.scope: Deactivated successfully. Dec 2 05:04:56 localhost podman[311885]: 2025-12-02 10:04:56.41153877 +0000 UTC m=+0.067791894 container remove aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.417 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[9eef4df9-becd-4cf9-8738-59c9c7809aef]: (4, ('Tue Dec 2 10:04:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 
(aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5)\naa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5\nTue Dec 2 10:04:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 (aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5)\naa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.419 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[74ec3334-3101-431b-b897-9cbd13ab4807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.420 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bbad22-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.424 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:56 localhost kernel: device tap13bbad22-a0 left promiscuous mode Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.434 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.439 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[f18d4d21-672e-4efa-afda-f371dae1a342]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.454 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[933960ca-ae1f-40d7-991e-fd951415aef4]: (4, None) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.455 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a153086b-5bc8-4679-a289-407df64bab54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.472 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[9132d26b-d90d-4d80-83a6-3401a01e348f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 
'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1205278, 'reachable_time': 33002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 
'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311903, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.475 160371 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.475 160371 DEBUG oslo.privsep.daemon [-] privsep: reply[18b0b5e1-651f-4f78-a463-e6f31a86ffd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.476 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c1f0bd46-6bae-4902-9292-e19c6e88557a in datapath 26a036bb-7fc2-42d0-b324-4cf6bb77a9d6 unbound from our chassis#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.480 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:04:56 localhost ovn_metadata_agent[160216]: 2025-12-02 10:04:56.481 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[df3f41b4-72cb-4f4e-a9a2-fbfc6eeb32b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.564 281858 INFO nova.virt.libvirt.driver [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deleting instance files /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78_del#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.565 281858 INFO nova.virt.libvirt.driver [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deletion of /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78_del complete#033[00m Dec 2 05:04:56 
localhost nova_compute[281854]: 2025-12-02 10:04:56.610 281858 INFO nova.compute.manager [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.611 281858 DEBUG oslo.service.loopingcall [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.612 281858 DEBUG nova.compute.manager [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 2 05:04:56 localhost nova_compute[281854]: 2025-12-02 10:04:56.612 281858 DEBUG nova.network.neutron [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 2 05:04:56 localhost systemd[1]: tmp-crun.0EJDya.mount: Deactivated successfully. Dec 2 05:04:56 localhost systemd[1]: var-lib-containers-storage-overlay-717f1ef704ec9c5b6d7c4f85d43274eb21a1cf80205ee2cbc3615610a19b18d1-merged.mount: Deactivated successfully. Dec 2 05:04:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa94b1110fea6af2db23700b994e23c475ca73bb6dcc0213f1d1418b60de19a5-userdata-shm.mount: Deactivated successfully. Dec 2 05:04:56 localhost systemd[1]: run-netns-ovnmeta\x2d13bbad22\x2dab61\x2d4b1f\x2d849e\x2dc651aa8f3297.mount: Deactivated successfully. 
Dec 2 05:04:56 localhost systemd[1]: var-lib-containers-storage-overlay-5f06f0939af8ced9e01822fd15f35fbfde05ec9e41ca9e0ac345284976c2f364-merged.mount: Deactivated successfully. Dec 2 05:04:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8e9f51d75b53fa73cc5ca712ffa936d423cca0867a5796abeddc016534ff980-userdata-shm.mount: Deactivated successfully. Dec 2 05:04:56 localhost systemd[1]: run-netns-ovnmeta\x2dc40d86e4\x2d7101\x2d443b\x2dabce\x2d328f7d1ea40e.mount: Deactivated successfully. Dec 2 05:04:57 localhost nova_compute[281854]: 2025-12-02 10:04:57.308 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:57 localhost systemd[1]: Stopping User Manager for UID 42436... Dec 2 05:04:57 localhost systemd[311212]: Activating special unit Exit the Session... Dec 2 05:04:57 localhost systemd[311212]: Stopped target Main User Target. Dec 2 05:04:57 localhost systemd[311212]: Stopped target Basic System. Dec 2 05:04:57 localhost systemd[311212]: Stopped target Paths. Dec 2 05:04:57 localhost systemd[311212]: Stopped target Sockets. Dec 2 05:04:57 localhost systemd[311212]: Stopped target Timers. Dec 2 05:04:57 localhost systemd[311212]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 2 05:04:57 localhost systemd[311212]: Stopped Daily Cleanup of User's Temporary Directories. Dec 2 05:04:57 localhost systemd[311212]: Closed D-Bus User Message Bus Socket. Dec 2 05:04:57 localhost systemd[311212]: Stopped Create User's Volatile Files and Directories. Dec 2 05:04:57 localhost systemd[311212]: Removed slice User Application Slice. Dec 2 05:04:57 localhost systemd[311212]: Reached target Shutdown. Dec 2 05:04:57 localhost systemd[311212]: Finished Exit the Session. 
Dec 2 05:04:57 localhost systemd[311212]: Reached target Exit the Session. Dec 2 05:04:57 localhost systemd[1]: user@42436.service: Deactivated successfully. Dec 2 05:04:57 localhost systemd[1]: Stopped User Manager for UID 42436. Dec 2 05:04:57 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Dec 2 05:04:57 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. Dec 2 05:04:57 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Dec 2 05:04:57 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. Dec 2 05:04:57 localhost systemd[1]: Removed slice User Slice of UID 42436. Dec 2 05:04:57 localhost nova_compute[281854]: 2025-12-02 10:04:57.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:59.190 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:01Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=54433c73-7e5c-481c-b64c-19e9cfd6e56f, ip_allocation=immediate, mac_address=fa:16:3e:bb:b6:1c, name=tempest-parent-146896978, network_id=13bbad22-ab61-4b1f-849e-c651aa8f3297, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=13, security_groups=['576d6513-029b-4880-bb0b-58094b586b90'], standard_attr_id=537, status=DOWN, tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, trunk_details=sub_ports=[], 
trunk_id=3bda7a6b-42c4-4395-9870-485919ec4ac2, updated_at=2025-12-02T10:04:58Z on network 13bbad22-ab61-4b1f-849e-c651aa8f3297#033[00m Dec 2 05:04:59 localhost podman[311922]: 2025-12-02 10:04:59.399150517 +0000 UTC m=+0.058632148 container kill 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:04:59 localhost dnsmasq[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/addn_hosts - 2 addresses Dec 2 05:04:59 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/host Dec 2 05:04:59 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/opts Dec 2 05:04:59 localhost ovn_controller[154505]: 2025-12-02T10:04:59Z|00166|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:04:59 localhost nova_compute[281854]: 2025-12-02 10:04:59.480 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:04:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:04:59.607 263406 INFO neutron.agent.dhcp.agent [None req-33be06b8-0323-4e71-9863-e6bafdac769b - - - - - -] DHCP configuration for ports {'54433c73-7e5c-481c-b64c-19e9cfd6e56f'} is completed#033[00m Dec 2 05:04:59 localhost nova_compute[281854]: 2025-12-02 10:04:59.798 281858 DEBUG nova.network.neutron [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache 
with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:04:59 localhost nova_compute[281854]: 2025-12-02 10:04:59.825 281858 INFO nova.compute.manager [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Took 3.21 seconds to deallocate network for instance.#033[00m Dec 2 05:04:59 localhost nova_compute[281854]: 2025-12-02 10:04:59.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:04:59 localhost nova_compute[281854]: 2025-12-02 10:04:59.898 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:04:59 localhost nova_compute[281854]: 2025-12-02 10:04:59.898 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:04:59 localhost nova_compute[281854]: 2025-12-02 10:04:59.967 281858 DEBUG oslo_concurrency.processutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:05:00 localhost 
nova_compute[281854]: 2025-12-02 10:05:00.030 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.288 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:00 localhost dnsmasq[311070]: exiting on receipt of SIGTERM Dec 2 05:05:00 localhost podman[311979]: 2025-12-02 10:05:00.30416091 +0000 UTC m=+0.067305982 container kill 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:05:00 localhost systemd[1]: libpod-2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88.scope: Deactivated successfully. Dec 2 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:05:00 localhost neutron_sriov_agent[256494]: 2025-12-02 10:05:00.404 2 INFO neutron.agent.securitygroups_rpc [req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 req-4740c003-3af7-4933-8b00-851aa84e7e55 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group member updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']#033[00m Dec 2 05:05:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:05:00 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/557978648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:05:00 localhost podman[311999]: 2025-12-02 10:05:00.421422066 +0000 UTC m=+0.080368321 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:05:00 localhost podman[311999]: 2025-12-02 10:05:00.426840831 +0000 UTC m=+0.085787066 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.435 281858 DEBUG oslo_concurrency.processutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:00 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:05:00 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:00.444 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:00Z, description=, device_id=abf8d33c-4e24-4d26-af41-b01c828c67e0, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4, ip_allocation=immediate, mac_address=fa:16:3e:16:9d:c1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:04:35Z, description=, dns_domain=, id=45d02cf1-f511-4416-b7c1-b37c417f16f9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-1627103925-network, port_security_enabled=True, project_id=50df25ee29424615807a458690cdf8d7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33331, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=681, status=ACTIVE, subnets=['34aa8025-e49d-4c09-aefd-41c4d8900224'], tags=[], tenant_id=50df25ee29424615807a458690cdf8d7, updated_at=2025-12-02T10:04:36Z, vlan_transparent=None, network_id=45d02cf1-f511-4416-b7c1-b37c417f16f9, port_security_enabled=True, project_id=50df25ee29424615807a458690cdf8d7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b'], standard_attr_id=757, status=DOWN, tags=[], tenant_id=50df25ee29424615807a458690cdf8d7, updated_at=2025-12-02T10:05:00Z on network 45d02cf1-f511-4416-b7c1-b37c417f16f9#033[00m Dec 2 05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.444 281858 DEBUG nova.compute.provider_tree [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 
d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:05:00 localhost podman[311998]: 2025-12-02 10:05:00.451785028 +0000 UTC m=+0.118352077 container died 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:05:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.478 281858 DEBUG nova.scheduler.client.report [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:05:00 localhost systemd[1]: var-lib-containers-storage-overlay-909ccbf636b56e5fcb70f402308fe6a02f149a317eaed6dc848cd26938534901-merged.mount: Deactivated successfully. Dec 2 05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.502 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:00 localhost podman[312000]: 2025-12-02 10:05:00.516708595 +0000 UTC m=+0.176043450 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.544 281858 INFO nova.scheduler.client.report [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Deleted allocations for instance 82e23ec3-1d57-4166-9ba0-839ded943a78#033[00m Dec 2 05:05:00 localhost podman[312000]: 2025-12-02 10:05:00.561192405 +0000 UTC m=+0.220527190 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 05:05:00 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:05:00 localhost podman[311998]: 2025-12-02 10:05:00.620383479 +0000 UTC m=+0.286950428 container remove 2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-26a036bb-7fc2-42d0-b324-4cf6bb77a9d6, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:05:00 localhost systemd[1]: libpod-conmon-2dcb997821f4d2734f28b29240adaafbdb32b4d61b868150280126a880924e88.scope: Deactivated successfully. 
Dec 2 05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.631 281858 DEBUG oslo_concurrency.lockutils [None req-bb096adb-4a93-4648-916e-6da0c1af22b8 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 5.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:00 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:00.656 263406 INFO neutron.agent.dhcp.agent [None req-c5a175e5-f595-4217-adbe-0df0f2fc8df8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:00 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:00.658 263406 INFO neutron.agent.dhcp.agent [None req-c5a175e5-f595-4217-adbe-0df0f2fc8df8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:00 localhost podman[312081]: 2025-12-02 10:05:00.768709897 +0000 UTC m=+0.062998676 container kill 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:05:00 localhost dnsmasq[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/addn_hosts - 2 addresses Dec 2 05:05:00 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/host Dec 2 05:05:00 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/opts Dec 2 
05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:00 localhost nova_compute[281854]: 2025-12-02 10:05:00.936 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:01.056 263406 INFO neutron.agent.dhcp.agent [None req-4fe268aa-7bb5-4bb8-af41-38d54c225599 - - - - - -] DHCP configuration for ports {'a0a73e76-685f-4ba0-87b5-5dd27b54fab4'} is completed#033[00m Dec 2 05:05:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:01.170 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005541914.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:00Z, description=, device_id=abf8d33c-4e24-4d26-af41-b01c828c67e0, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com, extra_dhcp_opts=[], fixed_ips=[], id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4, ip_allocation=immediate, mac_address=fa:16:3e:16:9d:c1, name=, network_id=45d02cf1-f511-4416-b7c1-b37c417f16f9, port_security_enabled=True, project_id=50df25ee29424615807a458690cdf8d7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b'], standard_attr_id=757, 
status=DOWN, tags=[], tenant_id=50df25ee29424615807a458690cdf8d7, updated_at=2025-12-02T10:05:00Z on network 45d02cf1-f511-4416-b7c1-b37c417f16f9#033[00m Dec 2 05:05:01 localhost dnsmasq[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/addn_hosts - 2 addresses Dec 2 05:05:01 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/host Dec 2 05:05:01 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/opts Dec 2 05:05:01 localhost podman[312119]: 2025-12-02 10:05:01.409857169 +0000 UTC m=+0.077431962 container kill 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:01 localhost systemd[1]: run-netns-qdhcp\x2d26a036bb\x2d7fc2\x2d42d0\x2db324\x2d4cf6bb77a9d6.mount: Deactivated successfully. 
Dec 2 05:05:01 localhost neutron_sriov_agent[256494]: 2025-12-02 10:05:01.531 2 INFO neutron.agent.securitygroups_rpc [None req-9eec1e00-2947-423e-88e8-b2e4c78afea0 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']#033[00m Dec 2 05:05:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:01.588 263406 INFO neutron.agent.dhcp.agent [None req-c4a472ea-6649-45d6-bbcf-cb7850575d3a - - - - - -] DHCP configuration for ports {'a0a73e76-685f-4ba0-87b5-5dd27b54fab4'} is completed#033[00m Dec 2 05:05:01 localhost nova_compute[281854]: 2025-12-02 10:05:01.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e109 e109: 6 total, 6 up, 6 in Dec 2 05:05:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:03.049 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:03.049 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:03.050 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:04 localhost openstack_network_exporter[242845]: ERROR 10:05:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:05:04 localhost openstack_network_exporter[242845]: ERROR 10:05:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:05:04 localhost openstack_network_exporter[242845]: ERROR 10:05:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:05:04 localhost openstack_network_exporter[242845]: ERROR 10:05:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:05:04 localhost openstack_network_exporter[242845]: Dec 2 05:05:04 localhost openstack_network_exporter[242845]: ERROR 10:05:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:05:04 localhost openstack_network_exporter[242845]: Dec 2 05:05:04 localhost ovn_controller[154505]: 2025-12-02T10:05:04Z|00167|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:05:04 localhost nova_compute[281854]: 2025-12-02 10:05:04.984 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost nova_compute[281854]: 2025-12-02 10:05:05.328 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:05.532 263406 INFO neutron.agent.linux.ip_lib [None req-29d225a6-dc6b-4286-ae5e-3b8927ee1c5b - - - - - -] Device tapbf5295be-03 cannot be used as it has no MAC address#033[00m Dec 2 05:05:05 localhost neutron_sriov_agent[256494]: 2025-12-02 10:05:05.551 2 INFO neutron.agent.securitygroups_rpc 
[None req-7954669c-1491-4ccc-a463-0efe07ba8bc3 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']#033[00m Dec 2 05:05:05 localhost nova_compute[281854]: 2025-12-02 10:05:05.554 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost kernel: device tapbf5295be-03 entered promiscuous mode Dec 2 05:05:05 localhost ovn_controller[154505]: 2025-12-02T10:05:05Z|00168|binding|INFO|Claiming lport bf5295be-0321-4f82-8125-4c1394da80db for this chassis. Dec 2 05:05:05 localhost ovn_controller[154505]: 2025-12-02T10:05:05Z|00169|binding|INFO|bf5295be-0321-4f82-8125-4c1394da80db: Claiming unknown Dec 2 05:05:05 localhost nova_compute[281854]: 2025-12-02 10:05:05.561 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost NetworkManager[5965]: [1764669905.5625] manager: (tapbf5295be-03): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Dec 2 05:05:05 localhost systemd-udevd[312151]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:05:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:05.571 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-a0d374a1-1751-4f10-b3b2-966d56e45d4e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d374a1-1751-4f10-b3b2-966d56e45d4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29134a5a6b554e34bc1729ff0e939209', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f78d00a-923c-4dce-8ef1-798cd9b95762, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bf5295be-0321-4f82-8125-4c1394da80db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:05.572 160221 INFO neutron.agent.ovn.metadata.agent [-] Port bf5295be-0321-4f82-8125-4c1394da80db in datapath a0d374a1-1751-4f10-b3b2-966d56e45d4e bound to our chassis#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:05.574 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a0d374a1-1751-4f10-b3b2-966d56e45d4e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:05:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:05.575 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5af1dbe3-70e8-4bd6-b5c4-d6990bc1c41e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:05 localhost ovn_controller[154505]: 2025-12-02T10:05:05Z|00170|binding|INFO|Setting lport bf5295be-0321-4f82-8125-4c1394da80db ovn-installed in OVS Dec 2 05:05:05 localhost ovn_controller[154505]: 2025-12-02T10:05:05Z|00171|binding|INFO|Setting lport bf5295be-0321-4f82-8125-4c1394da80db up in Southbound Dec 2 05:05:05 localhost nova_compute[281854]: 2025-12-02 10:05:05.584 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost nova_compute[281854]: 2025-12-02 10:05:05.601 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost nova_compute[281854]: 2025-12-02 10:05:05.631 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost nova_compute[281854]: 2025-12-02 10:05:05.654 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:05 localhost dnsmasq[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/addn_hosts - 1 addresses Dec 2 05:05:05 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/host Dec 2 05:05:05 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/opts Dec 2 05:05:05 localhost podman[312180]: 2025-12-02 10:05:05.750141064 +0000 UTC m=+0.055272900 container kill 
77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 05:05:05 localhost nova_compute[281854]: 2025-12-02 10:05:05.937 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:06 localhost podman[240799]: time="2025-12-02T10:05:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:05:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:06 localhost podman[240799]: @ - - [02/Dec/2025:10:05:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159754 "" "Go-http-client/1.1" Dec 2 05:05:06 localhost podman[240799]: @ - - [02/Dec/2025:10:05:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20200 "" "Go-http-client/1.1" Dec 2 05:05:06 localhost podman[312243]: Dec 2 05:05:06 localhost podman[312243]: 2025-12-02 10:05:06.520460272 +0000 UTC m=+0.119550520 container create e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:05:06 localhost systemd[1]: Started libpod-conmon-e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336.scope. Dec 2 05:05:06 localhost podman[312243]: 2025-12-02 10:05:06.457969439 +0000 UTC m=+0.057059767 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:05:06 localhost systemd[1]: Started libcrun container. Dec 2 05:05:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e7a871c6a7c5e20892b6d31b8714daf06c35d9aa0dc6eaa8b0680c07851460a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:05:06 localhost podman[312243]: 2025-12-02 10:05:06.578966336 +0000 UTC m=+0.178056594 container init e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:05:06 localhost podman[312243]: 2025-12-02 10:05:06.586306743 +0000 UTC m=+0.185396991 container start e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:06 localhost dnsmasq[312260]: started, version 2.85 cachesize 150 Dec 2 05:05:06 localhost dnsmasq[312260]: DNS service limited to local subnets Dec 2 05:05:06 localhost dnsmasq[312260]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:05:06 localhost dnsmasq[312260]: warning: no upstream servers configured Dec 2 05:05:06 localhost dnsmasq-dhcp[312260]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:05:06 localhost dnsmasq[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/addn_hosts - 0 addresses Dec 2 05:05:06 localhost dnsmasq-dhcp[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/host Dec 2 05:05:06 localhost dnsmasq-dhcp[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/opts Dec 2 05:05:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e110 e110: 6 total, 6 up, 6 in Dec 2 05:05:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:07.672 263406 INFO neutron.agent.dhcp.agent [None req-dfef2434-02c5-45c2-b720-09a76b78cc03 - - - - - -] DHCP configuration for ports {'5043eaee-7a67-478c-bc16-9e2356ef58c3'} is completed#033[00m Dec 2 05:05:07 localhost dnsmasq[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/addn_hosts - 0 addresses Dec 2 05:05:07 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/host Dec 2 05:05:07 localhost dnsmasq-dhcp[308473]: read /var/lib/neutron/dhcp/13bbad22-ab61-4b1f-849e-c651aa8f3297/opts Dec 2 05:05:07 localhost podman[312276]: 2025-12-02 10:05:07.856712061 +0000 UTC m=+0.072179272 container kill 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:05:09 localhost ovn_controller[154505]: 2025-12-02T10:05:09Z|00172|binding|INFO|Releasing lport c4946b01-0395-4a62-9a39-4286d5803bca from this chassis (sb_readonly=0) Dec 2 05:05:09 localhost kernel: device tapc4946b01-03 left promiscuous mode Dec 2 05:05:09 localhost nova_compute[281854]: 2025-12-02 10:05:09.589 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:09 localhost ovn_controller[154505]: 2025-12-02T10:05:09Z|00173|binding|INFO|Setting lport c4946b01-0395-4a62-9a39-4286d5803bca down in Southbound Dec 2 05:05:09 localhost nova_compute[281854]: 2025-12-02 10:05:09.607 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:09 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:09.954 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:device_owner': 
'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c4946b01-0395-4a62-9a39-4286d5803bca) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:09 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:09.956 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c4946b01-0395-4a62-9a39-4286d5803bca in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis#033[00m Dec 2 05:05:09 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:09.958 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13bbad22-ab61-4b1f-849e-c651aa8f3297, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:05:09 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:09.959 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ec0b4b-16ff-4dfa-b664-35df202bb32b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:10 localhost nova_compute[281854]: 2025-12-02 10:05:10.332 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:10 localhost nova_compute[281854]: 2025-12-02 10:05:10.866 281858 DEBUG nova.virt.driver [-] Emitting 
event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 2 05:05:10 localhost nova_compute[281854]: 2025-12-02 10:05:10.867 281858 INFO nova.compute.manager [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Stopped (Lifecycle Event)#033[00m Dec 2 05:05:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 e111: 6 total, 6 up, 6 in Dec 2 05:05:10 localhost nova_compute[281854]: 2025-12-02 10:05:10.940 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:10 localhost nova_compute[281854]: 2025-12-02 10:05:10.956 281858 DEBUG nova.compute.manager [None req-91b579d9-7b1c-461e-afee-58fa95f1d2e6 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 2 05:05:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:05:13 localhost podman[312298]: 2025-12-02 10:05:13.434317118 +0000 UTC m=+0.077102714 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:05:13 localhost podman[312298]: 2025-12-02 10:05:13.477117313 +0000 UTC m=+0.119902879 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:05:13 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:05:15 localhost nova_compute[281854]: 2025-12-02 10:05:15.381 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:15 localhost nova_compute[281854]: 2025-12-02 10:05:15.942 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:15 localhost ovn_controller[154505]: 2025-12-02T10:05:15Z|00174|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:05:15 localhost nova_compute[281854]: 2025-12-02 10:05:15.997 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:16 localhost nova_compute[281854]: 2025-12-02 10:05:16.016 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:05:16 localhost podman[312317]: 2025-12-02 10:05:16.455945315 +0000 UTC m=+0.088719464 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:05:16 localhost podman[312317]: 2025-12-02 10:05:16.464985197 +0000 UTC 
m=+0.097759286 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 05:05:16 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:05:16 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:16.626 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:16Z, description=, device_id=1ad64abe-8977-48b7-83a3-2b942dce5ba9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3e752f51-4af7-48aa-9e8c-d2eedb1ead78, ip_allocation=immediate, mac_address=fa:16:3e:0d:96:87, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:05:01Z, description=, dns_domain=, id=a0d374a1-1751-4f10-b3b2-966d56e45d4e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-911438573-network, port_security_enabled=True, project_id=29134a5a6b554e34bc1729ff0e939209, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5631, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=763, status=ACTIVE, subnets=['59ecd7aa-b77a-48c2-b3b9-401ba5c9b11b'], tags=[], tenant_id=29134a5a6b554e34bc1729ff0e939209, updated_at=2025-12-02T10:05:03Z, vlan_transparent=None, network_id=a0d374a1-1751-4f10-b3b2-966d56e45d4e, port_security_enabled=False, project_id=29134a5a6b554e34bc1729ff0e939209, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=808, status=DOWN, tags=[], tenant_id=29134a5a6b554e34bc1729ff0e939209, updated_at=2025-12-02T10:05:16Z on network a0d374a1-1751-4f10-b3b2-966d56e45d4e#033[00m Dec 2 05:05:16 localhost dnsmasq[308473]: exiting on receipt of SIGTERM Dec 2 05:05:16 localhost podman[312352]: 2025-12-02 10:05:16.630918077 +0000 UTC m=+0.045672254 container kill 
77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:05:16 localhost systemd[1]: libpod-77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7.scope: Deactivated successfully. Dec 2 05:05:16 localhost podman[312367]: 2025-12-02 10:05:16.681561931 +0000 UTC m=+0.035426729 container died 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:16 localhost podman[312367]: 2025-12-02 10:05:16.723509204 +0000 UTC m=+0.077374002 container remove 77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes 
Operator team) Dec 2 05:05:16 localhost systemd[1]: libpod-conmon-77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7.scope: Deactivated successfully. Dec 2 05:05:16 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:16.758 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:16 localhost dnsmasq[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/addn_hosts - 1 addresses Dec 2 05:05:16 localhost dnsmasq-dhcp[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/host Dec 2 05:05:16 localhost dnsmasq-dhcp[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/opts Dec 2 05:05:16 localhost podman[312409]: 2025-12-02 10:05:16.848825156 +0000 UTC m=+0.055047723 container kill e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:05:17 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:17.068 263406 INFO neutron.agent.dhcp.agent [None req-5728adaf-47c6-4148-b907-d1c645ccb334 - - - - - -] DHCP configuration for ports {'3e752f51-4af7-48aa-9e8c-d2eedb1ead78'} is completed#033[00m Dec 2 05:05:17 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:17.110 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:17 localhost systemd[1]: var-lib-containers-storage-overlay-896dba9b1a38f0638159f863e9536c69068bcbb89b8facb5a357e5a5dc8cf960-merged.mount: Deactivated 
successfully. Dec 2 05:05:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77037373fbf82d7f180f8f44af5375c4189d52dcc7de8304c6ea7370610e44f7-userdata-shm.mount: Deactivated successfully. Dec 2 05:05:17 localhost systemd[1]: run-netns-qdhcp\x2d13bbad22\x2dab61\x2d4b1f\x2d849e\x2dc651aa8f3297.mount: Deactivated successfully. Dec 2 05:05:17 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:17.717 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:16Z, description=, device_id=1ad64abe-8977-48b7-83a3-2b942dce5ba9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3e752f51-4af7-48aa-9e8c-d2eedb1ead78, ip_allocation=immediate, mac_address=fa:16:3e:0d:96:87, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:05:01Z, description=, dns_domain=, id=a0d374a1-1751-4f10-b3b2-966d56e45d4e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-911438573-network, port_security_enabled=True, project_id=29134a5a6b554e34bc1729ff0e939209, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5631, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=763, status=ACTIVE, subnets=['59ecd7aa-b77a-48c2-b3b9-401ba5c9b11b'], tags=[], tenant_id=29134a5a6b554e34bc1729ff0e939209, updated_at=2025-12-02T10:05:03Z, vlan_transparent=None, network_id=a0d374a1-1751-4f10-b3b2-966d56e45d4e, port_security_enabled=False, project_id=29134a5a6b554e34bc1729ff0e939209, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=808, status=DOWN, 
tags=[], tenant_id=29134a5a6b554e34bc1729ff0e939209, updated_at=2025-12-02T10:05:16Z on network a0d374a1-1751-4f10-b3b2-966d56e45d4e#033[00m Dec 2 05:05:17 localhost dnsmasq[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/addn_hosts - 1 addresses Dec 2 05:05:17 localhost dnsmasq-dhcp[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/host Dec 2 05:05:17 localhost podman[312446]: 2025-12-02 10:05:17.944713364 +0000 UTC m=+0.059475902 container kill e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:05:17 localhost dnsmasq-dhcp[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/opts Dec 2 05:05:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:18.262 263406 INFO neutron.agent.dhcp.agent [None req-f6ea4edc-e8e9-4ae3-b79c-4484998561d8 - - - - - -] DHCP configuration for ports {'3e752f51-4af7-48aa-9e8c-d2eedb1ead78'} is completed#033[00m Dec 2 05:05:18 localhost podman[312484]: 2025-12-02 10:05:18.657705839 +0000 UTC m=+0.044628515 container kill 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:05:18 localhost dnsmasq[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/addn_hosts - 0 addresses Dec 2 05:05:18 localhost dnsmasq-dhcp[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/host Dec 2 05:05:18 localhost dnsmasq-dhcp[309469]: read /var/lib/neutron/dhcp/97ae066a-ecdb-4d1f-a021-787e342a02a4/opts Dec 2 05:05:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:05:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:05:18 localhost podman[312499]: 2025-12-02 10:05:18.748200829 +0000 UTC m=+0.059903853 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:05:18 localhost podman[312499]: 2025-12-02 10:05:18.756887233 +0000 UTC m=+0.068590287 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:05:18 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:05:18 localhost podman[312498]: 2025-12-02 10:05:18.807316552 +0000 UTC m=+0.123751573 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:05:18 localhost ovn_controller[154505]: 2025-12-02T10:05:18Z|00175|binding|INFO|Releasing lport ae9b1151-5912-406f-ae7b-9db37b471685 from this chassis (sb_readonly=0) Dec 2 05:05:18 localhost nova_compute[281854]: 2025-12-02 10:05:18.815 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:18 localhost kernel: device tapae9b1151-59 left promiscuous mode Dec 2 05:05:18 localhost ovn_controller[154505]: 2025-12-02T10:05:18Z|00176|binding|INFO|Setting lport ae9b1151-5912-406f-ae7b-9db37b471685 down in Southbound Dec 2 05:05:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:18.825 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-97ae066a-ecdb-4d1f-a021-787e342a02a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-97ae066a-ecdb-4d1f-a021-787e342a02a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc1edab5ae5d43f08b967b5bf594f8b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5764aa57-a87d-4e3f-89b1-49a48ee4f883, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ae9b1151-5912-406f-ae7b-9db37b471685) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:18.826 160221 INFO neutron.agent.ovn.metadata.agent [-] Port ae9b1151-5912-406f-ae7b-9db37b471685 in datapath 97ae066a-ecdb-4d1f-a021-787e342a02a4 unbound from our chassis#033[00m Dec 2 05:05:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:18.827 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97ae066a-ecdb-4d1f-a021-787e342a02a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:05:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:18.828 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[fc11fd53-a573-48a4-a2c8-ff9af4920ac8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:18 localhost nova_compute[281854]: 2025-12-02 10:05:18.837 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:18 localhost podman[312498]: 2025-12-02 10:05:18.847949548 +0000 UTC m=+0.164384569 container exec_died 
6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm) Dec 2 05:05:18 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:05:19 localhost systemd[1]: tmp-crun.V4gevc.mount: Deactivated successfully. Dec 2 05:05:20 localhost nova_compute[281854]: 2025-12-02 10:05:20.423 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:20 localhost ovn_controller[154505]: 2025-12-02T10:05:20Z|00177|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:05:20 localhost nova_compute[281854]: 2025-12-02 10:05:20.591 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:20 localhost nova_compute[281854]: 2025-12-02 10:05:20.946 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:21 localhost dnsmasq[309469]: exiting on receipt of SIGTERM Dec 2 05:05:21 localhost podman[312649]: 2025-12-02 10:05:21.343260676 +0000 UTC m=+0.043876956 container kill 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 2 05:05:21 localhost systemd[1]: libpod-2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d.scope: Deactivated successfully. Dec 2 05:05:21 localhost podman[312662]: 2025-12-02 10:05:21.38567734 +0000 UTC m=+0.032429579 container died 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 2 05:05:21 localhost systemd[1]: tmp-crun.bFFykn.mount: Deactivated successfully. 
Dec 2 05:05:21 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:05:21 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:05:21 localhost podman[312662]: 2025-12-02 10:05:21.482757558 +0000 UTC m=+0.129509817 container cleanup 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:05:21 localhost systemd[1]: libpod-conmon-2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d.scope: Deactivated successfully. 
Dec 2 05:05:21 localhost podman[312668]: 2025-12-02 10:05:21.507129789 +0000 UTC m=+0.140615183 container remove 2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-97ae066a-ecdb-4d1f-a021-787e342a02a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:05:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:21.703 263406 INFO neutron.agent.dhcp.agent [None req-8256c916-4369-449f-98f7-d3a76b9976e7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:21.704 263406 INFO neutron.agent.dhcp.agent [None req-8256c916-4369-449f-98f7-d3a76b9976e7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:21.812 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:22 localhost systemd[1]: var-lib-containers-storage-overlay-f69386878a877368468586813b3dbb1937ee49b0390efbec5dd7e4f609902381-merged.mount: Deactivated successfully. Dec 2 05:05:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c8dc5a4dbc8911ab6f2c075727c856467cd80206751f8c06727935126920b4d-userdata-shm.mount: Deactivated successfully. Dec 2 05:05:22 localhost systemd[1]: run-netns-qdhcp\x2d97ae066a\x2decdb\x2d4d1f\x2da021\x2d787e342a02a4.mount: Deactivated successfully. 
Dec 2 05:05:22 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:05:25 localhost nova_compute[281854]: 2025-12-02 10:05:25.450 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:25 localhost nova_compute[281854]: 2025-12-02 10:05:25.948 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:05:26 localhost systemd[1]: tmp-crun.lY4SDO.mount: Deactivated successfully. Dec 2 05:05:26 localhost podman[312691]: 2025-12-02 10:05:26.454593477 +0000 UTC m=+0.091993661 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:05:26 localhost podman[312691]: 2025-12-02 10:05:26.489990954 +0000 UTC m=+0.127391178 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:05:26 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:05:26 localhost dnsmasq[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/addn_hosts - 0 addresses Dec 2 05:05:26 localhost dnsmasq-dhcp[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/host Dec 2 05:05:26 localhost podman[312727]: 2025-12-02 10:05:26.64570782 +0000 UTC m=+0.081228764 container kill e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:05:26 localhost dnsmasq-dhcp[312260]: read /var/lib/neutron/dhcp/a0d374a1-1751-4f10-b3b2-966d56e45d4e/opts Dec 2 05:05:26 localhost ovn_controller[154505]: 2025-12-02T10:05:26Z|00178|binding|INFO|Releasing lport bf5295be-0321-4f82-8125-4c1394da80db from this chassis (sb_readonly=0) Dec 2 05:05:26 localhost kernel: device tapbf5295be-03 left 
promiscuous mode Dec 2 05:05:26 localhost nova_compute[281854]: 2025-12-02 10:05:26.817 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:26 localhost ovn_controller[154505]: 2025-12-02T10:05:26Z|00179|binding|INFO|Setting lport bf5295be-0321-4f82-8125-4c1394da80db down in Southbound Dec 2 05:05:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:26.826 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-a0d374a1-1751-4f10-b3b2-966d56e45d4e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d374a1-1751-4f10-b3b2-966d56e45d4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29134a5a6b554e34bc1729ff0e939209', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f78d00a-923c-4dce-8ef1-798cd9b95762, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bf5295be-0321-4f82-8125-4c1394da80db) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:26.828 160221 INFO neutron.agent.ovn.metadata.agent 
[-] Port bf5295be-0321-4f82-8125-4c1394da80db in datapath a0d374a1-1751-4f10-b3b2-966d56e45d4e unbound from our chassis#033[00m Dec 2 05:05:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:26.831 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d374a1-1751-4f10-b3b2-966d56e45d4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:05:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:26.832 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[53805bbe-de4e-469c-bb25-27c74aff5dae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:26 localhost nova_compute[281854]: 2025-12-02 10:05:26.845 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:28 localhost ovn_controller[154505]: 2025-12-02T10:05:28Z|00180|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:05:28 localhost nova_compute[281854]: 2025-12-02 10:05:28.146 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:28 localhost systemd[1]: tmp-crun.tbw36N.mount: Deactivated successfully. 
Dec 2 05:05:28 localhost dnsmasq[312260]: exiting on receipt of SIGTERM Dec 2 05:05:28 localhost podman[312766]: 2025-12-02 10:05:28.633870469 +0000 UTC m=+0.071597086 container kill e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:05:28 localhost systemd[1]: libpod-e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336.scope: Deactivated successfully. Dec 2 05:05:28 localhost podman[312778]: 2025-12-02 10:05:28.69334577 +0000 UTC m=+0.048926570 container died e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:05:28 localhost systemd[1]: tmp-crun.qHM4Gf.mount: Deactivated successfully. 
Dec 2 05:05:28 localhost podman[312778]: 2025-12-02 10:05:28.785291529 +0000 UTC m=+0.140872329 container cleanup e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:05:28 localhost systemd[1]: libpod-conmon-e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336.scope: Deactivated successfully. Dec 2 05:05:28 localhost podman[312785]: 2025-12-02 10:05:28.811650025 +0000 UTC m=+0.153414935 container remove e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a0d374a1-1751-4f10-b3b2-966d56e45d4e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:05:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:28.877 263406 INFO neutron.agent.dhcp.agent [None req-6b265581-9cc9-418d-afd1-98a43a0bdba5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:28.877 263406 INFO neutron.agent.dhcp.agent [None req-6b265581-9cc9-418d-afd1-98a43a0bdba5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:29 localhost systemd[1]: 
var-lib-containers-storage-overlay-7e7a871c6a7c5e20892b6d31b8714daf06c35d9aa0dc6eaa8b0680c07851460a-merged.mount: Deactivated successfully. Dec 2 05:05:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e475fa1f3bd5e1e1c4c802a0246d97debe1749e5a828608645200f034bb18336-userdata-shm.mount: Deactivated successfully. Dec 2 05:05:29 localhost systemd[1]: run-netns-qdhcp\x2da0d374a1\x2d1751\x2d4f10\x2db3b2\x2d966d56e45d4e.mount: Deactivated successfully. Dec 2 05:05:30 localhost nova_compute[281854]: 2025-12-02 10:05:30.483 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:30 localhost nova_compute[281854]: 2025-12-02 10:05:30.950 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:05:31 localhost podman[312808]: 2025-12-02 10:05:31.431976935 +0000 UTC m=+0.064193158 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS) Dec 2 05:05:31 localhost systemd[1]: tmp-crun.eYTp05.mount: Deactivated successfully. 
Dec 2 05:05:31 localhost podman[312807]: 2025-12-02 10:05:31.49945043 +0000 UTC m=+0.131347595 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:05:31 localhost podman[312807]: 2025-12-02 10:05:31.50689358 +0000 UTC m=+0.138790795 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:05:31 localhost podman[312808]: 2025-12-02 10:05:31.517832743 +0000 UTC m=+0.150048886 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 05:05:31 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:05:31 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:05:32 localhost neutron_sriov_agent[256494]: 2025-12-02 10:05:32.559 2 INFO neutron.agent.securitygroups_rpc [req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 req-ec3b1f9e-8373-4159-93fc-d4de0998f605 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group member updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']#033[00m Dec 2 05:05:32 localhost systemd[1]: tmp-crun.ANUCOB.mount: Deactivated successfully. Dec 2 05:05:32 localhost dnsmasq[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/addn_hosts - 1 addresses Dec 2 05:05:32 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/host Dec 2 05:05:32 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/opts Dec 2 05:05:32 localhost podman[312872]: 2025-12-02 10:05:32.834762434 +0000 UTC m=+0.072334876 container kill 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:05:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:33.675 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': 
'24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:33 localhost nova_compute[281854]: 2025-12-02 10:05:33.675 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:33.678 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:05:34 localhost openstack_network_exporter[242845]: ERROR 10:05:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:05:34 localhost openstack_network_exporter[242845]: ERROR 10:05:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:05:34 localhost openstack_network_exporter[242845]: ERROR 10:05:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:05:34 localhost openstack_network_exporter[242845]: ERROR 10:05:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:05:34 localhost openstack_network_exporter[242845]: Dec 2 05:05:34 localhost openstack_network_exporter[242845]: ERROR 10:05:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:05:34 localhost openstack_network_exporter[242845]: Dec 2 05:05:35 localhost dnsmasq[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/addn_hosts - 0 addresses Dec 2 05:05:35 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/host Dec 2 05:05:35 localhost dnsmasq-dhcp[311147]: read /var/lib/neutron/dhcp/45d02cf1-f511-4416-b7c1-b37c417f16f9/opts Dec 2 05:05:35 
localhost podman[312909]: 2025-12-02 10:05:35.444210944 +0000 UTC m=+0.059157874 container kill 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 05:05:35 localhost nova_compute[281854]: 2025-12-02 10:05:35.539 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:35 localhost ovn_controller[154505]: 2025-12-02T10:05:35Z|00181|binding|INFO|Releasing lport bd990115-9909-4e4e-a861-f26c2f53a28c from this chassis (sb_readonly=0) Dec 2 05:05:35 localhost kernel: device tapbd990115-99 left promiscuous mode Dec 2 05:05:35 localhost nova_compute[281854]: 2025-12-02 10:05:35.627 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:35 localhost ovn_controller[154505]: 2025-12-02T10:05:35Z|00182|binding|INFO|Setting lport bd990115-9909-4e4e-a861-f26c2f53a28c down in Southbound Dec 2 05:05:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:35.636 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50df25ee29424615807a458690cdf8d7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b257864-5151-448f-941d-2c9a748f5881, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bd990115-9909-4e4e-a861-f26c2f53a28c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:35.638 160221 INFO neutron.agent.ovn.metadata.agent [-] Port bd990115-9909-4e4e-a861-f26c2f53a28c in datapath 45d02cf1-f511-4416-b7c1-b37c417f16f9 unbound from our chassis#033[00m Dec 2 05:05:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:35.640 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45d02cf1-f511-4416-b7c1-b37c417f16f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:05:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:35.641 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9e23b0-a6ec-4a2f-8eae-7f723f2602a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:35 localhost nova_compute[281854]: 2025-12-02 10:05:35.649 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:35 localhost nova_compute[281854]: 2025-12-02 10:05:35.952 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:36 localhost podman[240799]: time="2025-12-02T10:05:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:05:36 localhost podman[240799]: @ - - [02/Dec/2025:10:05:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156107 "" "Go-http-client/1.1" Dec 2 05:05:36 localhost podman[240799]: @ - - [02/Dec/2025:10:05:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19231 "" "Go-http-client/1.1" Dec 2 05:05:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:37 localhost ovn_controller[154505]: 2025-12-02T10:05:37Z|00183|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:05:37 localhost nova_compute[281854]: 2025-12-02 10:05:37.648 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:37.681 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:05:38 localhost dnsmasq[311147]: exiting on receipt of SIGTERM Dec 2 05:05:38 localhost podman[312948]: 2025-12-02 10:05:38.394872875 +0000 UTC 
m=+0.068783922 container kill 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:05:38 localhost systemd[1]: libpod-5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d.scope: Deactivated successfully. Dec 2 05:05:38 localhost podman[312970]: 2025-12-02 10:05:38.459666248 +0000 UTC m=+0.041517022 container died 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:05:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d-userdata-shm.mount: Deactivated successfully. Dec 2 05:05:38 localhost systemd[1]: var-lib-containers-storage-overlay-94147dbae9838956e714723c867733a25e47b3b6162526a89da5f485c251bb56-merged.mount: Deactivated successfully. 
Dec 2 05:05:38 localhost podman[312970]: 2025-12-02 10:05:38.506128011 +0000 UTC m=+0.087978795 container remove 5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45d02cf1-f511-4416-b7c1-b37c417f16f9, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:05:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:38.545 263406 INFO neutron.agent.dhcp.agent [None req-87b7f267-6b80-4d43-a122-dd4b29ac1e77 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:38 localhost systemd[1]: run-netns-qdhcp\x2d45d02cf1\x2df511\x2d4416\x2db7c1\x2db37c417f16f9.mount: Deactivated successfully. Dec 2 05:05:38 localhost systemd[1]: libpod-conmon-5965ec520471163aaf8447e0aa55f7487dc0d208bec7096828d6c940d9f6539d.scope: Deactivated successfully. Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. 
Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.141525) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939141654, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2503, "num_deletes": 256, "total_data_size": 3268334, "memory_usage": 3316448, "flush_reason": "Manual Compaction"} Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939160532, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2111274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20131, "largest_seqno": 22629, "table_properties": {"data_size": 2102355, "index_size": 5489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19897, "raw_average_key_size": 21, "raw_value_size": 2083908, "raw_average_value_size": 2205, "num_data_blocks": 241, "num_entries": 945, "num_filter_entries": 945, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669769, "oldest_key_time": 1764669769, "file_creation_time": 1764669939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 19113 microseconds, and 7582 cpu microseconds. Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.160593) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2111274 bytes OK Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.160671) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.162836) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.162856) EVENT_LOG_v1 {"time_micros": 1764669939162850, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.162879) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3257109, prev total WAL file size 
3257109, number of live WAL files 2. Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.163935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end) Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2061KB)], [33(16MB)] Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939163981, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 19278738, "oldest_snapshot_seqno": -1} Dec 2 05:05:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:39.224 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12430 keys, 16638511 bytes, temperature: kUnknown Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939253848, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 16638511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16567485, "index_size": 38861, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 333149, "raw_average_key_size": 26, "raw_value_size": 16355449, "raw_average_value_size": 1315, "num_data_blocks": 1480, "num_entries": 12430, "num_filter_entries": 12430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764669939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.254171) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 16638511 bytes Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.255661) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.2 rd, 184.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 16.4 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(17.0) write-amplify(7.9) OK, records in: 12959, records dropped: 529 output_compression: NoCompression Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.255682) EVENT_LOG_v1 {"time_micros": 1764669939255673, "job": 18, "event": "compaction_finished", "compaction_time_micros": 89989, "compaction_time_cpu_micros": 45970, "output_level": 6, "num_output_files": 1, "total_output_size": 16638511, "num_input_records": 12959, "num_output_records": 12430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939256006, "job": 18, "event": "table_file_deletion", "file_number": 35} Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939257748, "job": 18, 
"event": "table_file_deletion", "file_number": 33} Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.163834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.257855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.257864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.257867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.257869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:39 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:05:39.257872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:05:40 localhost nova_compute[281854]: 2025-12-02 10:05:40.575 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:40 localhost nova_compute[281854]: 2025-12-02 10:05:40.955 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:42.820 263406 INFO neutron.agent.linux.ip_lib [None req-6cfb1242-45dd-447f-a6fc-1acdb666be60 - - - - - -] Device tapf7f7d342-f4 cannot be used as it has no MAC address#033[00m Dec 2 
05:05:42 localhost nova_compute[281854]: 2025-12-02 10:05:42.887 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:42 localhost kernel: device tapf7f7d342-f4 entered promiscuous mode Dec 2 05:05:42 localhost NetworkManager[5965]: [1764669942.8984] manager: (tapf7f7d342-f4): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Dec 2 05:05:42 localhost ovn_controller[154505]: 2025-12-02T10:05:42Z|00184|binding|INFO|Claiming lport f7f7d342-f447-418a-b17f-543c0a6fb6f4 for this chassis. Dec 2 05:05:42 localhost ovn_controller[154505]: 2025-12-02T10:05:42Z|00185|binding|INFO|f7f7d342-f447-418a-b17f-543c0a6fb6f4: Claiming unknown Dec 2 05:05:42 localhost nova_compute[281854]: 2025-12-02 10:05:42.898 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:42 localhost systemd-udevd[313000]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:05:42 localhost journal[230136]: ethtool ioctl error on tapf7f7d342-f4: No such device Dec 2 05:05:42 localhost ovn_controller[154505]: 2025-12-02T10:05:42Z|00186|binding|INFO|Setting lport f7f7d342-f447-418a-b17f-543c0a6fb6f4 ovn-installed in OVS Dec 2 05:05:42 localhost nova_compute[281854]: 2025-12-02 10:05:42.937 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:42 localhost journal[230136]: ethtool ioctl error on tapf7f7d342-f4: No such device Dec 2 05:05:42 localhost journal[230136]: ethtool ioctl error on tapf7f7d342-f4: No such device Dec 2 05:05:42 localhost journal[230136]: ethtool ioctl error on tapf7f7d342-f4: No such device Dec 2 05:05:42 localhost journal[230136]: ethtool ioctl error on tapf7f7d342-f4: No such device Dec 2 05:05:42 localhost journal[230136]: ethtool ioctl error on tapf7f7d342-f4: No such device Dec 2 05:05:42 localhost journal[230136]: ethtool ioctl error on tapf7f7d342-f4: No such device Dec 2 05:05:42 localhost journal[230136]: ethtool ioctl error on tapf7f7d342-f4: No such device Dec 2 05:05:42 localhost nova_compute[281854]: 2025-12-02 10:05:42.981 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:43.003 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-dc1b6fff-63f9-4fbd-b22d-9d87141c4454', 
'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc1b6fff-63f9-4fbd-b22d-9d87141c4454', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91b4824d03bd43c4aca137037a18bd3d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1403e5f6-3958-4f2a-b5a7-a41f1931563b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f7f7d342-f447-418a-b17f-543c0a6fb6f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:43.006 160221 INFO neutron.agent.ovn.metadata.agent [-] Port f7f7d342-f447-418a-b17f-543c0a6fb6f4 in datapath dc1b6fff-63f9-4fbd-b22d-9d87141c4454 bound to our chassis#033[00m Dec 2 05:05:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:43.007 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dc1b6fff-63f9-4fbd-b22d-9d87141c4454 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:05:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:43.009 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb991b3-e045-47ab-84ee-c4c5a9a62ee3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:43 localhost ovn_controller[154505]: 2025-12-02T10:05:43Z|00187|binding|INFO|Setting lport f7f7d342-f447-418a-b17f-543c0a6fb6f4 up in Southbound Dec 2 05:05:43 localhost nova_compute[281854]: 2025-12-02 10:05:43.020 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:43 localhost podman[313071]: Dec 2 05:05:43 localhost podman[313071]: 2025-12-02 10:05:43.874349555 +0000 UTC m=+0.093321368 container create 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:05:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:05:43 localhost systemd[1]: Started libpod-conmon-3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14.scope. Dec 2 05:05:43 localhost podman[313071]: 2025-12-02 10:05:43.831743115 +0000 UTC m=+0.050714938 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:05:43 localhost systemd[1]: Started libcrun container. 
Dec 2 05:05:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b9a5477488579bfa771c21dc4987fd440d0aad6300b05f1bb3cfe33ee68a44/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:05:43 localhost podman[313071]: 2025-12-02 10:05:43.97206751 +0000 UTC m=+0.191039313 container init 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 05:05:43 localhost podman[313071]: 2025-12-02 10:05:43.981130772 +0000 UTC m=+0.200102575 container start 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:43 localhost dnsmasq[313101]: started, version 2.85 cachesize 150 Dec 2 05:05:43 localhost dnsmasq[313101]: DNS service limited to local subnets Dec 2 05:05:43 localhost dnsmasq[313101]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:05:43 localhost dnsmasq[313101]: warning: no upstream servers configured Dec 
2 05:05:43 localhost dnsmasq-dhcp[313101]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:05:43 localhost dnsmasq[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/addn_hosts - 0 addresses Dec 2 05:05:43 localhost dnsmasq-dhcp[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/host Dec 2 05:05:43 localhost dnsmasq-dhcp[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/opts Dec 2 05:05:44 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:44.050 263406 INFO neutron.agent.dhcp.agent [None req-6cfb1242-45dd-447f-a6fc-1acdb666be60 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:42Z, description=, device_id=0221007c-3d7c-420b-901d-7f4f12bcb06b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8544d541-fb07-4eb3-912b-88ac82079cb6, ip_allocation=immediate, mac_address=fa:16:3e:e4:27:da, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:05:38Z, description=, dns_domain=, id=dc1b6fff-63f9-4fbd-b22d-9d87141c4454, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1809554588, port_security_enabled=True, project_id=91b4824d03bd43c4aca137037a18bd3d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1956, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=930, status=ACTIVE, subnets=['c6378a19-844e-42c3-ad47-9aacfa4b56e5'], tags=[], tenant_id=91b4824d03bd43c4aca137037a18bd3d, updated_at=2025-12-02T10:05:41Z, vlan_transparent=None, network_id=dc1b6fff-63f9-4fbd-b22d-9d87141c4454, port_security_enabled=False, 
project_id=91b4824d03bd43c4aca137037a18bd3d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=955, status=DOWN, tags=[], tenant_id=91b4824d03bd43c4aca137037a18bd3d, updated_at=2025-12-02T10:05:42Z on network dc1b6fff-63f9-4fbd-b22d-9d87141c4454#033[00m Dec 2 05:05:44 localhost podman[313086]: 2025-12-02 10:05:44.06631613 +0000 UTC m=+0.139691958 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:05:44 localhost podman[313086]: 2025-12-02 10:05:44.082109553 +0000 UTC m=+0.155485361 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0) Dec 2 05:05:44 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:05:44 localhost dnsmasq[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/addn_hosts - 1 addresses Dec 2 05:05:44 localhost dnsmasq-dhcp[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/host Dec 2 05:05:44 localhost podman[313128]: 2025-12-02 10:05:44.247805876 +0000 UTC m=+0.058466035 container kill 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:44 localhost dnsmasq-dhcp[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/opts Dec 2 05:05:44 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:44.371 263406 INFO neutron.agent.dhcp.agent [None req-c9803045-1c9a-4e59-9be1-e43b7f00297c - - - - - -] DHCP configuration for ports {'70e5bec2-545f-4237-982e-f54d4963b727'} is completed#033[00m Dec 2 05:05:44 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:44.542 263406 INFO neutron.agent.dhcp.agent [None req-b3773605-e597-4540-97c5-f9889f69d6be - - - - - -] DHCP configuration for ports {'8544d541-fb07-4eb3-912b-88ac82079cb6'} is completed#033[00m Dec 2 05:05:44 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:44.555 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:42Z, description=, device_id=0221007c-3d7c-420b-901d-7f4f12bcb06b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8544d541-fb07-4eb3-912b-88ac82079cb6, ip_allocation=immediate, mac_address=fa:16:3e:e4:27:da, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:05:38Z, description=, dns_domain=, id=dc1b6fff-63f9-4fbd-b22d-9d87141c4454, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1809554588, port_security_enabled=True, project_id=91b4824d03bd43c4aca137037a18bd3d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1956, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=930, status=ACTIVE, subnets=['c6378a19-844e-42c3-ad47-9aacfa4b56e5'], tags=[], tenant_id=91b4824d03bd43c4aca137037a18bd3d, updated_at=2025-12-02T10:05:41Z, vlan_transparent=None, network_id=dc1b6fff-63f9-4fbd-b22d-9d87141c4454, port_security_enabled=False, project_id=91b4824d03bd43c4aca137037a18bd3d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=955, status=DOWN, tags=[], tenant_id=91b4824d03bd43c4aca137037a18bd3d, updated_at=2025-12-02T10:05:42Z on network dc1b6fff-63f9-4fbd-b22d-9d87141c4454#033[00m Dec 2 05:05:44 localhost dnsmasq[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/addn_hosts - 1 addresses Dec 2 05:05:44 localhost dnsmasq-dhcp[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/host Dec 2 05:05:44 localhost dnsmasq-dhcp[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/opts Dec 2 05:05:44 localhost podman[313166]: 2025-12-02 10:05:44.730780727 +0000 UTC m=+0.053045930 container kill 
3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:05:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:45.064 263406 INFO neutron.agent.dhcp.agent [None req-298c58ea-adb1-4908-af51-f6caa1d2725b - - - - - -] DHCP configuration for ports {'8544d541-fb07-4eb3-912b-88ac82079cb6'} is completed#033[00m Dec 2 05:05:45 localhost nova_compute[281854]: 2025-12-02 10:05:45.611 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:45 localhost nova_compute[281854]: 2025-12-02 10:05:45.957 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:46 localhost dnsmasq[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/addn_hosts - 0 addresses Dec 2 05:05:46 localhost dnsmasq-dhcp[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/host Dec 2 05:05:46 localhost podman[313201]: 2025-12-02 10:05:46.079501329 +0000 UTC m=+0.062023210 container kill 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:05:46 localhost dnsmasq-dhcp[313101]: read /var/lib/neutron/dhcp/dc1b6fff-63f9-4fbd-b22d-9d87141c4454/opts Dec 2 05:05:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:46 localhost kernel: device tapf7f7d342-f4 left promiscuous mode Dec 2 05:05:46 localhost nova_compute[281854]: 2025-12-02 10:05:46.246 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:46 localhost ovn_controller[154505]: 2025-12-02T10:05:46Z|00188|binding|INFO|Releasing lport f7f7d342-f447-418a-b17f-543c0a6fb6f4 from this chassis (sb_readonly=0) Dec 2 05:05:46 localhost ovn_controller[154505]: 2025-12-02T10:05:46Z|00189|binding|INFO|Setting lport f7f7d342-f447-418a-b17f-543c0a6fb6f4 down in Southbound Dec 2 05:05:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:46.256 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-dc1b6fff-63f9-4fbd-b22d-9d87141c4454', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc1b6fff-63f9-4fbd-b22d-9d87141c4454', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91b4824d03bd43c4aca137037a18bd3d', 'neutron:revision_number': '3', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1403e5f6-3958-4f2a-b5a7-a41f1931563b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f7f7d342-f447-418a-b17f-543c0a6fb6f4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:46.258 160221 INFO neutron.agent.ovn.metadata.agent [-] Port f7f7d342-f447-418a-b17f-543c0a6fb6f4 in datapath dc1b6fff-63f9-4fbd-b22d-9d87141c4454 unbound from our chassis#033[00m Dec 2 05:05:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:46.260 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dc1b6fff-63f9-4fbd-b22d-9d87141c4454 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:05:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:46.261 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[fa570ebd-b71f-45ed-8770-63d45076fbb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:46 localhost nova_compute[281854]: 2025-12-02 10:05:46.265 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:05:47 localhost podman[313223]: 2025-12-02 10:05:47.439563553 +0000 UTC m=+0.079024425 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:05:47 localhost podman[313223]: 2025-12-02 10:05:47.474147319 +0000 UTC 
m=+0.113608181 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:47 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:05:47 localhost nova_compute[281854]: 2025-12-02 10:05:47.829 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:48 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e112 e112: 6 total, 6 up, 6 in Dec 2 05:05:48 localhost snmpd[69635]: empty variable list in _query Dec 2 05:05:48 localhost snmpd[69635]: empty variable list in _query Dec 2 05:05:48 localhost snmpd[69635]: empty variable list in _query Dec 2 05:05:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:05:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:05:49 localhost podman[313242]: 2025-12-02 10:05:49.452911696 +0000 UTC m=+0.089398722 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, config_id=edpm, version=9.6, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 05:05:49 localhost podman[313243]: 2025-12-02 10:05:49.54982648 +0000 UTC m=+0.182887885 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:05:49 localhost podman[313242]: 2025-12-02 10:05:49.567199334 +0000 UTC m=+0.203686330 container exec_died 
6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 05:05:49 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:49.574 263406 INFO neutron.agent.linux.ip_lib [None req-b74ccd4a-60b2-4178-9daf-e28ff3ee92d9 - - - - - -] Device tap5624f1cd-ac cannot be used as it has no MAC address#033[00m Dec 2 05:05:49 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:05:49 localhost podman[313243]: 2025-12-02 10:05:49.584674121 +0000 UTC m=+0.217735476 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:05:49 localhost nova_compute[281854]: 2025-12-02 10:05:49.601 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:05:49 localhost kernel: device tap5624f1cd-ac entered promiscuous mode Dec 2 05:05:49 localhost NetworkManager[5965]: [1764669949.6115] manager: (tap5624f1cd-ac): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Dec 2 05:05:49 localhost nova_compute[281854]: 2025-12-02 10:05:49.610 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost ovn_controller[154505]: 2025-12-02T10:05:49Z|00190|binding|INFO|Claiming lport 5624f1cd-ac01-4dc0-b6cb-827f7161ed5c for this chassis. Dec 2 05:05:49 localhost ovn_controller[154505]: 2025-12-02T10:05:49Z|00191|binding|INFO|5624f1cd-ac01-4dc0-b6cb-827f7161ed5c: Claiming unknown Dec 2 05:05:49 localhost systemd-udevd[313293]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:05:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:49.623 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-38b12dd1-ff52-416e-8f1c-79f301a7bf32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38b12dd1-ff52-416e-8f1c-79f301a7bf32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37e4f8f0e4cd48f5b7b2d1cb4c67377c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], 
mirror_rules=[], datapath=29b3a7f0-a12b-42c9-87fc-78534c7005fc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5624f1cd-ac01-4dc0-b6cb-827f7161ed5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:49.625 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 5624f1cd-ac01-4dc0-b6cb-827f7161ed5c in datapath 38b12dd1-ff52-416e-8f1c-79f301a7bf32 bound to our chassis#033[00m Dec 2 05:05:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:49.628 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port a007a046-bcdb-410b-a209-82b7bb4ffc2a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:05:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:49.628 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38b12dd1-ff52-416e-8f1c-79f301a7bf32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:05:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:49.629 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[16dbd353-eab1-4f7c-b2e3-98496cbcdc3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:49 localhost journal[230136]: ethtool ioctl error on tap5624f1cd-ac: No such device Dec 2 05:05:49 localhost journal[230136]: ethtool ioctl error on tap5624f1cd-ac: No such device Dec 2 05:05:49 localhost ovn_controller[154505]: 2025-12-02T10:05:49Z|00192|binding|INFO|Setting lport 5624f1cd-ac01-4dc0-b6cb-827f7161ed5c ovn-installed in OVS Dec 2 05:05:49 localhost ovn_controller[154505]: 2025-12-02T10:05:49Z|00193|binding|INFO|Setting lport 5624f1cd-ac01-4dc0-b6cb-827f7161ed5c up in Southbound 
Dec 2 05:05:49 localhost nova_compute[281854]: 2025-12-02 10:05:49.643 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost journal[230136]: ethtool ioctl error on tap5624f1cd-ac: No such device Dec 2 05:05:49 localhost journal[230136]: ethtool ioctl error on tap5624f1cd-ac: No such device Dec 2 05:05:49 localhost journal[230136]: ethtool ioctl error on tap5624f1cd-ac: No such device Dec 2 05:05:49 localhost journal[230136]: ethtool ioctl error on tap5624f1cd-ac: No such device Dec 2 05:05:49 localhost journal[230136]: ethtool ioctl error on tap5624f1cd-ac: No such device Dec 2 05:05:49 localhost journal[230136]: ethtool ioctl error on tap5624f1cd-ac: No such device Dec 2 05:05:49 localhost nova_compute[281854]: 2025-12-02 10:05:49.676 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:49 localhost nova_compute[281854]: 2025-12-02 10:05:49.701 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e113 e113: 6 total, 6 up, 6 in Dec 2 05:05:50 localhost podman[313364]: Dec 2 05:05:50 localhost podman[313364]: 2025-12-02 10:05:50.576920107 +0000 UTC m=+0.095684391 container create e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS) Dec 2 05:05:50 localhost podman[313364]: 2025-12-02 10:05:50.533064294 +0000 UTC m=+0.051828618 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:05:50 localhost nova_compute[281854]: 2025-12-02 10:05:50.654 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:50 localhost systemd[1]: Started libpod-conmon-e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3.scope. Dec 2 05:05:50 localhost systemd[1]: Started libcrun container. Dec 2 05:05:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596a1204c6242efa2669558e42ecdbcf1d8d6dced90449cb010cd2b06bf89c11/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:05:50 localhost podman[313364]: 2025-12-02 10:05:50.687832854 +0000 UTC m=+0.206597148 container init e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:05:50 localhost podman[313364]: 2025-12-02 10:05:50.697716068 +0000 UTC m=+0.216480352 container start e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:05:50 localhost dnsmasq[313382]: started, version 2.85 cachesize 150 Dec 2 05:05:50 localhost dnsmasq[313382]: DNS service limited to local subnets Dec 2 05:05:50 localhost dnsmasq[313382]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:05:50 localhost dnsmasq[313382]: warning: no upstream servers configured Dec 2 05:05:50 localhost dnsmasq-dhcp[313382]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:05:50 localhost dnsmasq[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/addn_hosts - 0 addresses Dec 2 05:05:50 localhost dnsmasq-dhcp[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/host Dec 2 05:05:50 localhost dnsmasq-dhcp[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/opts Dec 2 05:05:50 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:50.844 263406 INFO neutron.agent.dhcp.agent [None req-74c10537-b64f-4f0b-aecc-28aedf4a22d8 - - - - - -] DHCP configuration for ports {'3f430f3b-91ce-45f4-adeb-05cf984d7735'} is completed#033[00m Dec 2 05:05:50 localhost nova_compute[281854]: 2025-12-02 10:05:50.959 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:51 localhost nova_compute[281854]: 2025-12-02 10:05:51.493 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:52 localhost 
neutron_dhcp_agent[263402]: 2025-12-02 10:05:52.589 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:52Z, description=, device_id=798ad2c1-39c2-42cf-b43f-5f28ae054b5b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a6fc5ad3-55a2-486d-84d2-256a704a8fbe, ip_allocation=immediate, mac_address=fa:16:3e:cb:eb:b2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:05:45Z, description=, dns_domain=, id=38b12dd1-ff52-416e-8f1c-79f301a7bf32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-683263966-network, port_security_enabled=True, project_id=37e4f8f0e4cd48f5b7b2d1cb4c67377c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43495, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=976, status=ACTIVE, subnets=['aa72654c-fad9-468d-966a-038289774954'], tags=[], tenant_id=37e4f8f0e4cd48f5b7b2d1cb4c67377c, updated_at=2025-12-02T10:05:47Z, vlan_transparent=None, network_id=38b12dd1-ff52-416e-8f1c-79f301a7bf32, port_security_enabled=False, project_id=37e4f8f0e4cd48f5b7b2d1cb4c67377c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1020, status=DOWN, tags=[], tenant_id=37e4f8f0e4cd48f5b7b2d1cb4c67377c, updated_at=2025-12-02T10:05:52Z on network 38b12dd1-ff52-416e-8f1c-79f301a7bf32#033[00m Dec 2 05:05:52 localhost dnsmasq[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/addn_hosts - 1 addresses Dec 2 05:05:52 localhost podman[313400]: 2025-12-02 10:05:52.807283976 +0000 UTC m=+0.062562055 
container kill e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:05:52 localhost dnsmasq-dhcp[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/host Dec 2 05:05:52 localhost dnsmasq-dhcp[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/opts Dec 2 05:05:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:53.076 263406 INFO neutron.agent.dhcp.agent [None req-2df80f2f-6d14-491e-99e3-d652b998a29c - - - - - -] DHCP configuration for ports {'a6fc5ad3-55a2-486d-84d2-256a704a8fbe'} is completed#033[00m Dec 2 05:05:53 localhost nova_compute[281854]: 2025-12-02 10:05:53.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:53 localhost nova_compute[281854]: 2025-12-02 10:05:53.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:05:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:54.169 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:52Z, description=, device_id=798ad2c1-39c2-42cf-b43f-5f28ae054b5b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a6fc5ad3-55a2-486d-84d2-256a704a8fbe, ip_allocation=immediate, mac_address=fa:16:3e:cb:eb:b2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:05:45Z, description=, dns_domain=, id=38b12dd1-ff52-416e-8f1c-79f301a7bf32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroup264TestJSON-683263966-network, port_security_enabled=True, project_id=37e4f8f0e4cd48f5b7b2d1cb4c67377c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43495, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=976, status=ACTIVE, subnets=['aa72654c-fad9-468d-966a-038289774954'], tags=[], tenant_id=37e4f8f0e4cd48f5b7b2d1cb4c67377c, updated_at=2025-12-02T10:05:47Z, vlan_transparent=None, network_id=38b12dd1-ff52-416e-8f1c-79f301a7bf32, port_security_enabled=False, project_id=37e4f8f0e4cd48f5b7b2d1cb4c67377c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1020, status=DOWN, tags=[], tenant_id=37e4f8f0e4cd48f5b7b2d1cb4c67377c, updated_at=2025-12-02T10:05:52Z on network 38b12dd1-ff52-416e-8f1c-79f301a7bf32#033[00m Dec 2 05:05:54 localhost podman[313438]: 2025-12-02 10:05:54.381972053 +0000 UTC m=+0.061066474 container kill 
e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:05:54 localhost dnsmasq[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/addn_hosts - 1 addresses Dec 2 05:05:54 localhost dnsmasq-dhcp[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/host Dec 2 05:05:54 localhost dnsmasq-dhcp[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/opts Dec 2 05:05:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:54.628 263406 INFO neutron.agent.dhcp.agent [None req-51e3912d-4c17-4599-9679-205d78046919 - - - - - -] DHCP configuration for ports {'a6fc5ad3-55a2-486d-84d2-256a704a8fbe'} is completed#033[00m Dec 2 05:05:54 localhost nova_compute[281854]: 2025-12-02 10:05:54.830 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:54 localhost nova_compute[281854]: 2025-12-02 10:05:54.830 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:05:54 localhost nova_compute[281854]: 2025-12-02 10:05:54.830 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances 
to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:05:54 localhost nova_compute[281854]: 2025-12-02 10:05:54.911 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:05:54 localhost nova_compute[281854]: 2025-12-02 10:05:54.911 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:05:54 localhost nova_compute[281854]: 2025-12-02 10:05:54.911 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:05:54 localhost nova_compute[281854]: 2025-12-02 10:05:54.912 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:05:55 localhost dnsmasq[313101]: exiting on receipt of SIGTERM Dec 2 05:05:55 localhost podman[313473]: 2025-12-02 10:05:55.479728831 +0000 UTC m=+0.058291340 container kill 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:05:55 localhost systemd[1]: libpod-3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14.scope: Deactivated successfully. Dec 2 05:05:55 localhost podman[313486]: 2025-12-02 10:05:55.545422049 +0000 UTC m=+0.054915311 container died 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:05:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14-userdata-shm.mount: Deactivated successfully. Dec 2 05:05:55 localhost podman[313486]: 2025-12-02 10:05:55.632076987 +0000 UTC m=+0.141570179 container cleanup 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:05:55 localhost systemd[1]: libpod-conmon-3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14.scope: Deactivated successfully. 
Dec 2 05:05:55 localhost podman[313493]: 2025-12-02 10:05:55.658571146 +0000 UTC m=+0.155812229 container remove 3eb2b5d7417c639d5cc5cb2a310f614b4eb583a4bb54796756c3ba78eaebdd14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc1b6fff-63f9-4fbd-b22d-9d87141c4454, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.698 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.702 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.712 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.713 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.854 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.855 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.855 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.856 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.856 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:05:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:55.925 263406 INFO neutron.agent.dhcp.agent [None req-9d874735-3153-4d0e-b143-6a7609630514 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 e114: 6 total, 6 up, 6 in Dec 2 05:05:55 localhost nova_compute[281854]: 2025-12-02 10:05:55.961 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:05:56 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:05:56.168 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:05:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:05:56 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2634255040' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.326 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:56 localhost ovn_controller[154505]: 2025-12-02T10:05:56Z|00194|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.419 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.432 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.432 281858 DEBUG 
nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:05:56 localhost systemd[1]: var-lib-containers-storage-overlay-68b9a5477488579bfa771c21dc4987fd440d0aad6300b05f1bb3cfe33ee68a44-merged.mount: Deactivated successfully. Dec 2 05:05:56 localhost systemd[1]: run-netns-qdhcp\x2ddc1b6fff\x2d63f9\x2d4fbd\x2db22d\x2d9d87141c4454.mount: Deactivated successfully. Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.655 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.657 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11287MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.658 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.658 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 
10:05:56.762 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.763 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.764 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:05:56 localhost nova_compute[281854]: 2025-12-02 10:05:56.821 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:05:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:05:57 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2150290219' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:05:57 localhost nova_compute[281854]: 2025-12-02 10:05:57.263 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:05:57 localhost nova_compute[281854]: 2025-12-02 10:05:57.271 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:05:57 localhost nova_compute[281854]: 2025-12-02 10:05:57.293 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:05:57 localhost nova_compute[281854]: 2025-12-02 10:05:57.318 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:05:57 localhost nova_compute[281854]: 2025-12-02 10:05:57.319 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:05:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:05:57 localhost podman[313560]: 2025-12-02 10:05:57.428024224 +0000 UTC m=+0.069384008 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible) Dec 2 05:05:57 localhost podman[313560]: 2025-12-02 10:05:57.441217117 +0000 UTC m=+0.082576891 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd) Dec 2 05:05:57 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:05:57 localhost dnsmasq[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/addn_hosts - 0 addresses Dec 2 05:05:57 localhost dnsmasq-dhcp[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/host Dec 2 05:05:57 localhost podman[313595]: 2025-12-02 10:05:57.706808382 +0000 UTC m=+0.046447663 container kill e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:05:57 localhost dnsmasq-dhcp[313382]: read /var/lib/neutron/dhcp/38b12dd1-ff52-416e-8f1c-79f301a7bf32/opts Dec 2 05:05:57 localhost ovn_controller[154505]: 2025-12-02T10:05:57Z|00195|binding|INFO|Releasing lport 5624f1cd-ac01-4dc0-b6cb-827f7161ed5c from this chassis (sb_readonly=0) Dec 2 05:05:57 localhost nova_compute[281854]: 2025-12-02 10:05:57.872 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:57 localhost ovn_controller[154505]: 2025-12-02T10:05:57Z|00196|binding|INFO|Setting lport 5624f1cd-ac01-4dc0-b6cb-827f7161ed5c down in Southbound Dec 2 05:05:57 localhost kernel: device tap5624f1cd-ac left promiscuous mode Dec 2 05:05:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:57.881 160221 DEBUG 
ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-38b12dd1-ff52-416e-8f1c-79f301a7bf32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38b12dd1-ff52-416e-8f1c-79f301a7bf32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37e4f8f0e4cd48f5b7b2d1cb4c67377c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=29b3a7f0-a12b-42c9-87fc-78534c7005fc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5624f1cd-ac01-4dc0-b6cb-827f7161ed5c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:05:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:57.884 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 5624f1cd-ac01-4dc0-b6cb-827f7161ed5c in datapath 38b12dd1-ff52-416e-8f1c-79f301a7bf32 unbound from our chassis#033[00m Dec 2 05:05:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:57.887 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38b12dd1-ff52-416e-8f1c-79f301a7bf32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 
2 05:05:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:05:57.888 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[2cef9cff-93b2-4ca4-9f04-4215348208ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:05:57 localhost nova_compute[281854]: 2025-12-02 10:05:57.898 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:05:58 localhost nova_compute[281854]: 2025-12-02 10:05:58.315 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:05:59 localhost nova_compute[281854]: 2025-12-02 10:05:59.840 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:00 localhost nova_compute[281854]: 2025-12-02 10:06:00.719 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:00 localhost nova_compute[281854]: 2025-12-02 10:06:00.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:00 localhost ovn_controller[154505]: 2025-12-02T10:06:00Z|00197|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:06:00 localhost nova_compute[281854]: 2025-12-02 10:06:00.963 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:00 localhost nova_compute[281854]: 2025-12-02 10:06:00.970 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:01 localhost dnsmasq[313382]: exiting on receipt of SIGTERM Dec 2 05:06:01 localhost podman[313634]: 2025-12-02 10:06:01.388735593 +0000 UTC m=+0.063133090 container kill e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:06:01 localhost systemd[1]: libpod-e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3.scope: Deactivated successfully. 
Dec 2 05:06:01 localhost podman[313647]: 2025-12-02 10:06:01.459204538 +0000 UTC m=+0.056688237 container died e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:06:01 localhost systemd[1]: tmp-crun.iYeL04.mount: Deactivated successfully. Dec 2 05:06:01 localhost podman[313647]: 2025-12-02 10:06:01.497003879 +0000 UTC m=+0.094487518 container cleanup e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 05:06:01 localhost systemd[1]: libpod-conmon-e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3.scope: Deactivated successfully. 
Dec 2 05:06:01 localhost podman[313650]: 2025-12-02 10:06:01.545755614 +0000 UTC m=+0.131753646 container remove e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-38b12dd1-ff52-416e-8f1c-79f301a7bf32, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:06:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:01.569 263406 INFO neutron.agent.dhcp.agent [None req-b4145658-4bbe-44ca-ba78-ea4d0c003600 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:06:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:01.641 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:06:01 localhost nova_compute[281854]: 2025-12-02 10:06:01.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:01 localhost nova_compute[281854]: 2025-12-02 10:06:01.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 05:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:06:02 localhost systemd[1]: var-lib-containers-storage-overlay-596a1204c6242efa2669558e42ecdbcf1d8d6dced90449cb010cd2b06bf89c11-merged.mount: Deactivated successfully. Dec 2 05:06:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e43830afe28deb1bd8b8fcd14551789f91413e767d6670fc3fe3c8529f743cf3-userdata-shm.mount: Deactivated successfully. Dec 2 05:06:02 localhost systemd[1]: run-netns-qdhcp\x2d38b12dd1\x2dff52\x2d416e\x2d8f1c\x2d79f301a7bf32.mount: Deactivated successfully. Dec 2 05:06:02 localhost systemd[1]: tmp-crun.Kubx4N.mount: Deactivated successfully. Dec 2 05:06:02 localhost podman[313677]: 2025-12-02 10:06:02.449926163 +0000 UTC m=+0.091113378 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:06:02 localhost podman[313678]: 2025-12-02 10:06:02.501941045 +0000 UTC m=+0.140180001 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:06:02 localhost podman[313677]: 2025-12-02 10:06:02.513150855 +0000 UTC m=+0.154338060 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': 
'/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:06:02 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:06:02 localhost podman[313678]: 2025-12-02 10:06:02.568003413 +0000 UTC m=+0.206242319 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:06:02 localhost systemd[1]: 
cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:06:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:03.050 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:06:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:03.050 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:06:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:03.051 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:06:04 localhost openstack_network_exporter[242845]: ERROR 10:06:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:06:04 localhost openstack_network_exporter[242845]: ERROR 10:06:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:06:04 localhost openstack_network_exporter[242845]: ERROR 10:06:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:06:04 localhost openstack_network_exporter[242845]: ERROR 10:06:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:06:04 localhost openstack_network_exporter[242845]: Dec 2 05:06:04 localhost openstack_network_exporter[242845]: ERROR 10:06:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing 
datapath Dec 2 05:06:04 localhost openstack_network_exporter[242845]: Dec 2 05:06:05 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:05.630 2 INFO neutron.agent.securitygroups_rpc [None req-616a401d-f858-48e0-bbb1-73e58fa51cbe 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['1e6a52d4-a530-4d1c-b3c3-fd5c65190a35']#033[00m Dec 2 05:06:05 localhost nova_compute[281854]: 2025-12-02 10:06:05.757 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:05 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:05.895 2 INFO neutron.agent.securitygroups_rpc [None req-7c216765-b201-4648-8f14-301becf47f8c 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['1e6a52d4-a530-4d1c-b3c3-fd5c65190a35']#033[00m Dec 2 05:06:05 localhost nova_compute[281854]: 2025-12-02 10:06:05.965 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:06 localhost podman[240799]: time="2025-12-02T10:06:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:06:06 localhost podman[240799]: @ - - [02/Dec/2025:10:06:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:06:06 localhost podman[240799]: @ - - [02/Dec/2025:10:06:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18763 "" "Go-http-client/1.1" Dec 2 05:06:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:06 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:06.718 2 INFO 
neutron.agent.securitygroups_rpc [None req-008f26a2-2a9a-4275-8fd3-0db0ae3965dc 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:06 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:06.888 2 INFO neutron.agent.securitygroups_rpc [None req-2ee1f36b-3e28-45da-9995-f4334e4d09c3 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:07.147 2 INFO neutron.agent.securitygroups_rpc [None req-07b24d70-b40e-4b6b-a4d7-126ffe953c74 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:07.300 2 INFO neutron.agent.securitygroups_rpc [None req-44887692-98ed-4bee-8196-cf2b44c61f3b 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:07.453 2 INFO neutron.agent.securitygroups_rpc [None req-b501e38f-a705-4a3f-a758-0a1e958e6279 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:07.593 2 INFO neutron.agent.securitygroups_rpc [None req-7ad226a4-da1c-44ea-8953-97466b6b7a50 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:07 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:07.947 2 INFO 
neutron.agent.securitygroups_rpc [None req-45db658a-901d-430c-aa11-8109c9f781eb 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:08 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:08.149 2 INFO neutron.agent.securitygroups_rpc [None req-525377d0-23a2-43bc-9048-c1bbf6915b2f 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:08 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:08.365 2 INFO neutron.agent.securitygroups_rpc [None req-09e30383-7aff-4fc0-a180-6509153c799d 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:08.411 263406 INFO neutron.agent.linux.ip_lib [None req-42dd99b0-37bd-408c-958e-fcfef47e6cca - - - - - -] Device tap2dbcd8ec-20 cannot be used as it has no MAC address#033[00m Dec 2 05:06:08 localhost nova_compute[281854]: 2025-12-02 10:06:08.435 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:08 localhost kernel: device tap2dbcd8ec-20 entered promiscuous mode Dec 2 05:06:08 localhost NetworkManager[5965]: [1764669968.4440] manager: (tap2dbcd8ec-20): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Dec 2 05:06:08 localhost nova_compute[281854]: 2025-12-02 10:06:08.444 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:08 localhost ovn_controller[154505]: 2025-12-02T10:06:08Z|00198|binding|INFO|Claiming lport 2dbcd8ec-20c4-46b0-aa36-003343647b6f for this chassis. 
Dec 2 05:06:08 localhost ovn_controller[154505]: 2025-12-02T10:06:08Z|00199|binding|INFO|2dbcd8ec-20c4-46b0-aa36-003343647b6f: Claiming unknown Dec 2 05:06:08 localhost systemd-udevd[313732]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:06:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:08.458 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-bac36584-1981-4cdd-a84e-a0acd6701163', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bac36584-1981-4cdd-a84e-a0acd6701163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1273a829a21431083b7acd4fe017c0f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88083dc8-f50d-46ad-8d21-9b887e2c23b6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2dbcd8ec-20c4-46b0-aa36-003343647b6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:06:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:08.460 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 2dbcd8ec-20c4-46b0-aa36-003343647b6f in datapath bac36584-1981-4cdd-a84e-a0acd6701163 bound to our chassis#033[00m Dec 2 05:06:08 localhost ovn_metadata_agent[160216]: 2025-12-02 
10:06:08.462 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4ace7270-8d40-4f6c-9551-b4408b1d4a48 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:06:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:08.463 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bac36584-1981-4cdd-a84e-a0acd6701163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:06:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:08.464 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[460ce80c-ecd8-445e-b99f-d2a2792cf326]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:06:08 localhost ovn_controller[154505]: 2025-12-02T10:06:08Z|00200|binding|INFO|Setting lport 2dbcd8ec-20c4-46b0-aa36-003343647b6f ovn-installed in OVS Dec 2 05:06:08 localhost ovn_controller[154505]: 2025-12-02T10:06:08Z|00201|binding|INFO|Setting lport 2dbcd8ec-20c4-46b0-aa36-003343647b6f up in Southbound Dec 2 05:06:08 localhost journal[230136]: ethtool ioctl error on tap2dbcd8ec-20: No such device Dec 2 05:06:08 localhost nova_compute[281854]: 2025-12-02 10:06:08.477 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:08 localhost journal[230136]: ethtool ioctl error on tap2dbcd8ec-20: No such device Dec 2 05:06:08 localhost journal[230136]: ethtool ioctl error on tap2dbcd8ec-20: No such device Dec 2 05:06:08 localhost journal[230136]: ethtool ioctl error on tap2dbcd8ec-20: No such device Dec 2 05:06:08 localhost journal[230136]: ethtool ioctl error on tap2dbcd8ec-20: No such device Dec 2 05:06:08 localhost journal[230136]: ethtool ioctl error on tap2dbcd8ec-20: No such device Dec 2 
05:06:08 localhost journal[230136]: ethtool ioctl error on tap2dbcd8ec-20: No such device Dec 2 05:06:08 localhost journal[230136]: ethtool ioctl error on tap2dbcd8ec-20: No such device Dec 2 05:06:08 localhost nova_compute[281854]: 2025-12-02 10:06:08.515 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:08 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:08.541 2 INFO neutron.agent.securitygroups_rpc [None req-0d565c71-9045-4969-b270-4f682b986cb3 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']#033[00m Dec 2 05:06:08 localhost nova_compute[281854]: 2025-12-02 10:06:08.543 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:09 localhost nova_compute[281854]: 2025-12-02 10:06:09.130 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:09 localhost podman[313803]: Dec 2 05:06:09 localhost podman[313803]: 2025-12-02 10:06:09.467451672 +0000 UTC m=+0.095535728 container create 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:06:09 localhost systemd[1]: Started libpod-conmon-745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1.scope. 
Dec 2 05:06:09 localhost systemd[1]: tmp-crun.hAVMYS.mount: Deactivated successfully. Dec 2 05:06:09 localhost podman[313803]: 2025-12-02 10:06:09.420713961 +0000 UTC m=+0.048798007 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:06:09 localhost systemd[1]: Started libcrun container. Dec 2 05:06:09 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d72d12a9f6e625de9b3e59f6b237633fb3ec1fad9bdbceba45fc659a116379/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:06:09 localhost podman[313803]: 2025-12-02 10:06:09.550307688 +0000 UTC m=+0.178391664 container init 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:06:09 localhost podman[313803]: 2025-12-02 10:06:09.564723104 +0000 UTC m=+0.192807080 container start 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:06:09 localhost dnsmasq[313821]: started, version 2.85 cachesize 150 Dec 2 05:06:09 localhost 
dnsmasq[313821]: DNS service limited to local subnets Dec 2 05:06:09 localhost dnsmasq[313821]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:06:09 localhost dnsmasq[313821]: warning: no upstream servers configured Dec 2 05:06:09 localhost dnsmasq-dhcp[313821]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:06:09 localhost dnsmasq[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/addn_hosts - 0 addresses Dec 2 05:06:09 localhost dnsmasq-dhcp[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/host Dec 2 05:06:09 localhost dnsmasq-dhcp[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/opts Dec 2 05:06:09 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:09.682 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:06:09Z, description=, device_id=60f2c6f6-f230-49f9-b983-bd94d1e33602, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=28ecbd52-e8a2-4b21-bab1-ec146a5b1ee7, ip_allocation=immediate, mac_address=fa:16:3e:9c:d8:b4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:06:06Z, description=, dns_domain=, id=bac36584-1981-4cdd-a84e-a0acd6701163, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-1838791269-network, port_security_enabled=True, project_id=b1273a829a21431083b7acd4fe017c0f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53975, qos_policy_id=None, revision_number=2, router:external=False, shared=False, 
standard_attr_id=1088, status=ACTIVE, subnets=['1e0a4610-062f-4e3c-a349-111b09a25932'], tags=[], tenant_id=b1273a829a21431083b7acd4fe017c0f, updated_at=2025-12-02T10:06:06Z, vlan_transparent=None, network_id=bac36584-1981-4cdd-a84e-a0acd6701163, port_security_enabled=False, project_id=b1273a829a21431083b7acd4fe017c0f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1122, status=DOWN, tags=[], tenant_id=b1273a829a21431083b7acd4fe017c0f, updated_at=2025-12-02T10:06:09Z on network bac36584-1981-4cdd-a84e-a0acd6701163#033[00m Dec 2 05:06:09 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:09.731 2 INFO neutron.agent.securitygroups_rpc [None req-61a0e9aa-9477-43cf-af07-616f49d4b972 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['94ceebea-e233-4f36-9a23-49456abf3258']#033[00m Dec 2 05:06:09 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:09.775 263406 INFO neutron.agent.dhcp.agent [None req-846f888e-dfef-4c03-997c-92e8af513eed - - - - - -] DHCP configuration for ports {'8d7aa2a9-6ad1-4777-a763-cbbab2495c79'} is completed#033[00m Dec 2 05:06:09 localhost dnsmasq[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/addn_hosts - 1 addresses Dec 2 05:06:09 localhost podman[313838]: 2025-12-02 10:06:09.912383014 +0000 UTC m=+0.062365559 container kill 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 
05:06:09 localhost dnsmasq-dhcp[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/host Dec 2 05:06:09 localhost dnsmasq-dhcp[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/opts Dec 2 05:06:10 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:10.156 263406 INFO neutron.agent.dhcp.agent [None req-19b054e6-841b-49ab-b832-241cf2e7ed09 - - - - - -] DHCP configuration for ports {'28ecbd52-e8a2-4b21-bab1-ec146a5b1ee7'} is completed#033[00m Dec 2 05:06:10 localhost nova_compute[281854]: 2025-12-02 10:06:10.792 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:10 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:10.822 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:06:09Z, description=, device_id=60f2c6f6-f230-49f9-b983-bd94d1e33602, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=28ecbd52-e8a2-4b21-bab1-ec146a5b1ee7, ip_allocation=immediate, mac_address=fa:16:3e:9c:d8:b4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:06:06Z, description=, dns_domain=, id=bac36584-1981-4cdd-a84e-a0acd6701163, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-1838791269-network, port_security_enabled=True, project_id=b1273a829a21431083b7acd4fe017c0f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53975, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1088, status=ACTIVE, subnets=['1e0a4610-062f-4e3c-a349-111b09a25932'], tags=[], 
tenant_id=b1273a829a21431083b7acd4fe017c0f, updated_at=2025-12-02T10:06:06Z, vlan_transparent=None, network_id=bac36584-1981-4cdd-a84e-a0acd6701163, port_security_enabled=False, project_id=b1273a829a21431083b7acd4fe017c0f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1122, status=DOWN, tags=[], tenant_id=b1273a829a21431083b7acd4fe017c0f, updated_at=2025-12-02T10:06:09Z on network bac36584-1981-4cdd-a84e-a0acd6701163#033[00m Dec 2 05:06:10 localhost nova_compute[281854]: 2025-12-02 10:06:10.967 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:11 localhost dnsmasq[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/addn_hosts - 1 addresses Dec 2 05:06:11 localhost podman[313875]: 2025-12-02 10:06:11.09600255 +0000 UTC m=+0.061561588 container kill 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:06:11 localhost dnsmasq-dhcp[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/host Dec 2 05:06:11 localhost dnsmasq-dhcp[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/opts Dec 2 05:06:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:11 localhost neutron_dhcp_agent[263402]: 2025-12-02 
10:06:11.324 263406 INFO neutron.agent.dhcp.agent [None req-7e16ee11-e81d-4f19-baf8-a8ed7b0089cd - - - - - -] DHCP configuration for ports {'28ecbd52-e8a2-4b21-bab1-ec146a5b1ee7'} is completed#033[00m Dec 2 05:06:11 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:11.483 2 INFO neutron.agent.securitygroups_rpc [None req-a112143f-1afa-4cab-a314-c7a0cf01690b 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['bc246512-f2e7-49c6-b3c6-e51d67208518']#033[00m Dec 2 05:06:11 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:11.736 2 INFO neutron.agent.securitygroups_rpc [None req-2651c95b-dd09-4d1c-945d-8112466f351e 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['bc246512-f2e7-49c6-b3c6-e51d67208518']#033[00m Dec 2 05:06:12 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:12.928 2 INFO neutron.agent.securitygroups_rpc [None req-b3b89eff-a637-4d13-a86c-dcfc347ff722 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['482dba13-8db1-4254-a853-7fa4b3df0a8e']#033[00m Dec 2 05:06:13 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:13.105 2 INFO neutron.agent.securitygroups_rpc [None req-19b017d9-1c5d-463c-8fd1-6e01ac942ab6 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['482dba13-8db1-4254-a853-7fa4b3df0a8e']#033[00m Dec 2 05:06:13 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:13.582 2 INFO neutron.agent.securitygroups_rpc [None req-fdfabc3e-7f87-43e4-a533-8789992c1455 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:13 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:13.951 2 INFO neutron.agent.securitygroups_rpc [None 
req-64456867-c89f-4fe7-8d63-40c9e8a08ded e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']#033[00m Dec 2 05:06:14 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:14.035 2 INFO neutron.agent.securitygroups_rpc [None req-5a1a208f-7619-41f5-8ff5-57ae95d40ae9 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:14 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:14.235 2 INFO neutron.agent.securitygroups_rpc [None req-ba5eb572-7899-4268-a4d4-30f2b4a8108a 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:06:14 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:14.414 2 INFO neutron.agent.securitygroups_rpc [None req-9ba9d0cb-ece7-40eb-b408-fce3b926db2b e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']#033[00m Dec 2 05:06:14 localhost podman[313896]: 2025-12-02 10:06:14.440084833 +0000 UTC m=+0.082869778 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:06:14 localhost podman[313896]: 2025-12-02 10:06:14.450043769 +0000 UTC m=+0.092828744 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:06:14 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:14.449 2 INFO neutron.agent.securitygroups_rpc [None req-1dee4ff0-8cdb-4950-b86d-3d1f17272691 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:14 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:06:14 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:14.868 2 INFO neutron.agent.securitygroups_rpc [None req-c68f94c3-1ecf-4886-9e93-85ed57a0441f 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:15 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:15.566 2 INFO neutron.agent.securitygroups_rpc [None req-301242b3-4b9d-48c8-8e81-d6b06b4fcc41 e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']#033[00m Dec 2 05:06:15 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:15.600 2 INFO neutron.agent.securitygroups_rpc [None req-296124d6-ee7a-42c7-8fb1-fa352c7e4ccb 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']#033[00m Dec 2 05:06:15 localhost nova_compute[281854]: 2025-12-02 10:06:15.827 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:15 localhost nova_compute[281854]: 2025-12-02 10:06:15.968 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:06:16.105 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.106 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.106 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.111 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e8dc434e-dcce-493a-9b3e-8f9e1cf5db21', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.107035', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a3966da-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': '664de14218e71f36c1887f2fec06607b374cddc1f08c57651a5be65ffd8c6bdf'}]}, 'timestamp': '2025-12-02 10:06:16.111983', '_unique_id': '5604fd87106545059086d741f070e4e3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.113 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:06:16.114 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.115 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '65f30223-114e-4b8c-bf76-4b72ff7a71e0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.115152', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a39fb2c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': '0e34c1ac79cb05a330480108957e65990f9878479a69349b33aad1a96bc3d75c'}]}, 'timestamp': '2025-12-02 10:06:16.115771', '_unique_id': '9a1aac1d00914c26ab5be42f3f4896ff'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.116 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.117 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.143 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.143 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '29f89972-430c-465b-bb06-864e69a77e9d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.118173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a3e4876-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': '79377979492dbf96d3e85ff1df8c13c55105b72a2a5a2ebe43ee42f78cbea2fb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.118173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a3e5e74-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': 'c7411a9835948f13b04cc4826ce4d4bc2f0e143f28b4c1a89c63cb7a64c7086d'}]}, 'timestamp': '2025-12-02 10:06:16.144462', '_unique_id': '3e56c7d4abea43dc9ee89426e50d63cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.145 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.146 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.147 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.147 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '6505a724-4c3c-466d-aec0-f1ac792b1893', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.147100', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a3ed926-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': 'e2d93d6d1905d63d823ce9b697c93ae0cc4cef7c099371b2248b8b9f51bd9e00'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.147100', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a3eeb6e-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': 'f07218578cb1ba24e63603e052ff81f859d8ad3158d06c5e83c5439a6b5f2b26'}]}, 'timestamp': '2025-12-02 10:06:16.148097', '_unique_id': 'cfa238b391574207995cc2081afe25b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:06:16.149 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.150 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.150 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.150 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 
ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b1a97c9a-2894-42ad-a529-7ee8ab47281b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.150419', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a3f5b62-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': 'e2b8f43a3e7c173e5cfa91a3735c83bae59fd85e5b8e1c666ec5fbaab7a85bbf'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.150419', 'resource_metadata': 
{'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a3f6bb6-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': '7112aca97c9d816bfed63bc4be5825514961d830ab20c50ccd4c419b1f5b49ed'}]}, 'timestamp': '2025-12-02 10:06:16.151338', '_unique_id': '4064233348724d86a5aa226636db0ef8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.152 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.153 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.153 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '40f95c25-b220-43e3-aaa1-e992c85e8413', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.153676', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a3fd9ca-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': '4a54e05ded664a8e4f27897b1d395779dc31aa7d5a5e1f905044afd4ecff7427'}]}, 'timestamp': '2025-12-02 10:06:16.154188', '_unique_id': '6c038ab2af084d249f75c1b4422f2286'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.155 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.156 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.156 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3fb39997-72ea-4edd-a990-4ba89b04bf96', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.156451', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a404590-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': '117dcb3def9957b68dc9bf4990927b9fb489d9312904d2195c439434c01688b0'}]}, 'timestamp': '2025-12-02 10:06:16.156975', '_unique_id': '82da09e24c08429caad55ad1df77b9b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.157 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.159 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.168 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.169 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '30b864c1-06dd-436e-b954-94609ba1508c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.159200', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a4224d2-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.378301184, 'message_signature': '4e5e88c015ac8a18bf7806808223aa976f1363f69ea8727554f96cc95ffc098d'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.159200', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a423922-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.378301184, 'message_signature': '934b743e56e6e4d2391f2bb9143dc4eef947dee8466d3a23fe8aac48144badc2'}]}, 'timestamp': '2025-12-02 10:06:16.169751', '_unique_id': 'baf396672f6343caba9e093f9a631706'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:06:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.171 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.172 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.184 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 17080000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b074786b-c75f-4ce1-afbc-cb0286ba37be', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 17080000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:06:16.172510', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '8a448434-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.403117728, 'message_signature': '4bf19b91e1f1c06d935a05aec2ca3da04279e1e66aaa7327ca766b72a01008cf'}]}, 'timestamp': '2025-12-02 10:06:16.184840', '_unique_id': '370fa08b0cae43798a6159c762e8cf66'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.187 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.187 12 
DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '02c2a0b0-7464-481e-9ba2-e3a30db041b4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:06:16.187424', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '8a44ff7c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.403117728, 'message_signature': 'd022cc7ca3350c8a6b119663e584eab82b98f09c33a0d49d1c2ebac06e770bfd'}]}, 'timestamp': '2025-12-02 10:06:16.187907', '_unique_id': '14770a9766dc456c91f5c545b1e952e0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 
05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging conn = 
self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 
05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.190 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.190 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '67357ef1-f4c8-405c-81c1-30a7681f4f5d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.190669', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a458014-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': '14f2fcb25114c5171464d96940286ab29551426d5f7b18d12b1163d5bcfb4f29'}]}, 'timestamp': '2025-12-02 10:06:16.191256', '_unique_id': 'e5080024210145b888dd549ae58ea09d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.192 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.193 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.193 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.193 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6dd7fd34-e0f4-4b3a-b248-c4a0fbe9fee8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.193426', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a45e9b4-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': '3d94821860d3a0c60ea9cb3a0e40289793ee0d26de65daff35193d5596daa0fa'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.193426', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a45fa26-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': '2a92d9afc2c21e86eadd5a71cd59d7783012ff1b7612e96fe88d2100906d5e7f'}]}, 'timestamp': '2025-12-02 10:06:16.194317', '_unique_id': '80d32e33f83140b4b46e12fbdfd2e06f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.195 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.196 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.196 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7a891641-8d36-44e3-9f47-0402e26f35cb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.196497', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a4661dc-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': 'e332853b3002f297b7e8681f6c8858e7aa72398a7c009061f265db146c4de062'}]}, 'timestamp': '2025-12-02 10:06:16.196990', '_unique_id': '5c94736b037f4328b44e366e04c1c3b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02
10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.198 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.199 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'deff5126-8e96-4fc5-a953-d5fbaa84acec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.199980', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a46e8f0-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': 'cb64a0ae6cf8c726f29bb6e109a3405d73f37dd8a5057629bb8a6ac4755bdd0b'}]}, 'timestamp': '2025-12-02 10:06:16.200450', '_unique_id': '2d7fe07ebf824183836f49b7b53f7c6c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:06:16.202 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.202 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6a080cdb-e0b7-46ed-852c-34c54181f773', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.202751', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a4754c0-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.378301184, 'message_signature': 'fcb44c45d97b3450f2fa9fe8cfdda4e1d08d70cdfb6c0fd3387c68042099782e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.202751', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a4764f6-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.378301184, 'message_signature': '9a55757f4096465058c8f38b4e31ec4dfcc1f78a2a3bfc9245962a9f0e014c9e'}]}, 'timestamp': '2025-12-02 10:06:16.203644', '_unique_id': '6e262f24d7a94f799c3bc6460a744473'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.204 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:06:16.205 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.205 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '06f6b79c-9eb5-474a-b55a-21774a403621', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.205810', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a47cc20-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': '782b380354e73a905fbe62185942ea6147024c812457c08dd974292a9a4fa769'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.205810', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a47dcf6-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': 'ffe98361e5d1a936e05aa33138eac0e66e685e37fe1951ba26e99bd5b9c45f53'}]}, 'timestamp': '2025-12-02 10:06:16.206719', '_unique_id': '04f442ff0c5e45718c7e4de814141bf1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:06:16.209 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '03dca642-ab21-4337-853f-65362af7671d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.209153', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a484f1a-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': '4821f8daceab2eb95f7d07c04fba5b45368fb86df502e9b09f409c8b5b652714'}]}, 'timestamp': '2025-12-02 10:06:16.209655', '_unique_id': '2b8df72cc37843f9842d20b59e231d98'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.211 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.212 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.212 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.212 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1ed1f541-d53b-4eda-a574-b373a5ea2d5e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.212143', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a48c36e-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.378301184, 'message_signature': 'ca6e337537a1e876c6e2c3491e3206df475808d08f0edd3bfbc9d9351bb4f924'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.212143', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a48d520-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.378301184, 'message_signature': '9aa776524eeb2727da4e5e8d975978b77a58d10afdd1848e82fffd5e33ed5b82'}]}, 'timestamp': '2025-12-02 10:06:16.213021', '_unique_id': '07c3536820e349d280ad32f22cb366b7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.214 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.214 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.215 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'bca26634-4ec8-4e06-9f9a-58601f831bf6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:06:16.214941', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8a492d90-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': '022d90360ebb026d833f0a9c8400dad83beb9080b0863dbf6fe5ce2b4cca2078'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:06:16.214941', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8a49374a-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.337273047, 'message_signature': '075af2b68cc51ba493f5066cd1f7e01cdc42d4d6c9fe388b159c13f5a3253276'}]}, 'timestamp': '2025-12-02 10:06:16.215455', '_unique_id': '2c31d7544b35409daaecbc7fb6482ad1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:06:16.216 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.216 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3d7b858b-7979-4389-b319-5eb627089e52', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.216779', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a49757a-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': '58822057fdddc3d3eddf908515fa11526ae1fa9374b4d115bb796b1471e5844a'}]}, 'timestamp': '2025-12-02 10:06:16.217064', '_unique_id': '84c93caf711d459ca1dfe9aa594da345'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging Dec 2 05:06:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.217 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.218 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.218 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '291f535a-d0a2-4702-b6a3-92f67478513d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:06:16.218352', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '8a49b2ec-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12138.326113138, 'message_signature': 'add63f74528b0413dc717d779895a1a4d48bc8551318cd3785f529bf8f17b090'}]}, 'timestamp': '2025-12-02 10:06:16.218665', '_unique_id': '1053af5e96994cc381117442e3870bdd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:06:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:06:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:06:17 localhost dnsmasq[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/addn_hosts - 0 addresses
Dec 2 05:06:17 localhost dnsmasq-dhcp[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/host
Dec 2 05:06:17 localhost dnsmasq-dhcp[313821]: read /var/lib/neutron/dhcp/bac36584-1981-4cdd-a84e-a0acd6701163/opts
Dec 2 05:06:17 localhost podman[313932]: 2025-12-02 10:06:17.05544866 +0000 UTC m=+0.066166911 container kill 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:06:17 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:17.247 263406 INFO neutron.agent.linux.ip_lib [None req-d41693c2-6454-415e-b4b8-a89e8f4d47b0 - - - - - -] Device tap9452ee28-23 cannot be used as it has no MAC address#033[00m
Dec 2 05:06:17 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:17.264 2 INFO neutron.agent.securitygroups_rpc [None req-bb5bb253-d3fa-4182-a08a-6c77d73857f6 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['dc8aaaaf-7a11-4a4d-8334-5511e0a6c147']#033[00m
Dec 2 05:06:17 localhost nova_compute[281854]: 2025-12-02 10:06:17.273 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:06:17 localhost kernel: device tap9452ee28-23 entered promiscuous mode
Dec 2 05:06:17 localhost nova_compute[281854]: 2025-12-02 10:06:17.278 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:06:17 localhost NetworkManager[5965]: [1764669977.2792] manager: (tap9452ee28-23): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Dec 2 05:06:17 localhost ovn_controller[154505]: 2025-12-02T10:06:17Z|00202|binding|INFO|Claiming lport 9452ee28-2385-4409-9420-aba511c252a5 for this chassis.
Dec 2 05:06:17 localhost ovn_controller[154505]: 2025-12-02T10:06:17Z|00203|binding|INFO|9452ee28-2385-4409-9420-aba511c252a5: Claiming unknown
Dec 2 05:06:17 localhost systemd-udevd[313963]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.286 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.243/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-45b393b4-6935-41d7-9b33-e0a50bae89a0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45b393b4-6935-41d7-9b33-e0a50bae89a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1555f2d-4a9f-4453-a712-6d4c971353c9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9452ee28-2385-4409-9420-aba511c252a5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.287 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 9452ee28-2385-4409-9420-aba511c252a5 in datapath 45b393b4-6935-41d7-9b33-e0a50bae89a0 bound to our chassis#033[00m
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.290 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 144a91f0-4a98-4ccb-bad3-9780bb2aa0f5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.290 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45b393b4-6935-41d7-9b33-e0a50bae89a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.291 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3329a6-cc03-4f4d-a456-97d89cc8d5b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:06:17 localhost ovn_controller[154505]: 2025-12-02T10:06:17Z|00204|binding|INFO|Setting lport 9452ee28-2385-4409-9420-aba511c252a5 ovn-installed in OVS
Dec 2 05:06:17 localhost ovn_controller[154505]: 2025-12-02T10:06:17Z|00205|binding|INFO|Setting lport 9452ee28-2385-4409-9420-aba511c252a5 up in Southbound
Dec 2 05:06:17 localhost nova_compute[281854]: 2025-12-02 10:06:17.334 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:06:17 localhost ovn_controller[154505]: 2025-12-02T10:06:17Z|00206|binding|INFO|Releasing lport 2dbcd8ec-20c4-46b0-aa36-003343647b6f from this chassis (sb_readonly=0)
Dec 2 05:06:17 localhost ovn_controller[154505]: 2025-12-02T10:06:17Z|00207|binding|INFO|Setting lport 2dbcd8ec-20c4-46b0-aa36-003343647b6f down in Southbound
Dec 2 05:06:17 localhost nova_compute[281854]: 2025-12-02 10:06:17.343 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.351 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-bac36584-1981-4cdd-a84e-a0acd6701163', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bac36584-1981-4cdd-a84e-a0acd6701163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1273a829a21431083b7acd4fe017c0f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88083dc8-f50d-46ad-8d21-9b887e2c23b6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2dbcd8ec-20c4-46b0-aa36-003343647b6f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:06:17 localhost kernel: device tap2dbcd8ec-20 left promiscuous mode
Dec 2 05:06:17 localhost nova_compute[281854]: 2025-12-02 10:06:17.353 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.354 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 2dbcd8ec-20c4-46b0-aa36-003343647b6f in datapath bac36584-1981-4cdd-a84e-a0acd6701163 unbound from our chassis#033[00m
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.356 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bac36584-1981-4cdd-a84e-a0acd6701163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:06:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:17.357 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[e660699b-d4ff-4321-bf9c-216723c70bbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:06:17 localhost nova_compute[281854]: 2025-12-02 10:06:17.366 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:06:17 localhost nova_compute[281854]: 2025-12-02 10:06:17.371 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:06:17 localhost nova_compute[281854]: 2025-12-02 10:06:17.389 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:06:17 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:17.419 2 INFO neutron.agent.securitygroups_rpc [None req-c73fb2ed-db0d-4633-bf8b-3646a66fbf65 e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']#033[00m
Dec 2 05:06:18 localhost podman[314019]:
Dec 2 05:06:18 localhost podman[314019]: 2025-12-02 10:06:18.218589437 +0000 UTC m=+0.092379272 container create 99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45b393b4-6935-41d7-9b33-e0a50bae89a0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 2 05:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 05:06:18 localhost systemd[1]: Started libpod-conmon-99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818.scope.
Dec 2 05:06:18 localhost podman[314019]: 2025-12-02 10:06:18.171703874 +0000 UTC m=+0.045493739 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:06:18 localhost systemd[1]: Started libcrun container.
Dec 2 05:06:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc83d9e759505b4e4211d0f2aa7f5dccd534965c1725514fdb16919fad32d11/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:06:18 localhost podman[314019]: 2025-12-02 10:06:18.358380067 +0000 UTC m=+0.232169892 container init 99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45b393b4-6935-41d7-9b33-e0a50bae89a0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:06:18 localhost podman[314019]: 2025-12-02 10:06:18.37079935 +0000 UTC m=+0.244589175 container start 99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45b393b4-6935-41d7-9b33-e0a50bae89a0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:06:18 localhost dnsmasq[314056]: started, version 2.85 cachesize 150
Dec 2 05:06:18 localhost dnsmasq[314056]: DNS service limited to local subnets
Dec 2 05:06:18 localhost dnsmasq[314056]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:06:18 localhost dnsmasq[314056]: warning: no upstream servers configured
Dec 2 05:06:18 localhost dnsmasq-dhcp[314056]: DHCP, static leases only on 10.100.255.240, lease time 1d
Dec 2 05:06:18 localhost dnsmasq[314056]: read /var/lib/neutron/dhcp/45b393b4-6935-41d7-9b33-e0a50bae89a0/addn_hosts - 0 addresses
Dec 2 05:06:18 localhost dnsmasq-dhcp[314056]: read /var/lib/neutron/dhcp/45b393b4-6935-41d7-9b33-e0a50bae89a0/host
Dec 2 05:06:18 localhost dnsmasq-dhcp[314056]: read /var/lib/neutron/dhcp/45b393b4-6935-41d7-9b33-e0a50bae89a0/opts
Dec 2 05:06:18 localhost podman[314033]: 2025-12-02 10:06:18.356763964 +0000 UTC m=+0.094204061 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 2 05:06:18 localhost podman[314033]: 2025-12-02 10:06:18.444216534 +0000 UTC m=+0.181656621 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:06:18 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 05:06:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:18.529 263406 INFO neutron.agent.dhcp.agent [None req-8f1cfe6e-4a40-4fe0-af18-ad87f9287eb7 - - - - - -] DHCP configuration for ports {'d64539fc-1ec9-4bb1-9f63-20bffe6b7020'} is completed#033[00m Dec 2 05:06:19 localhost ovn_controller[154505]: 2025-12-02T10:06:19Z|00208|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:06:19 localhost nova_compute[281854]: 2025-12-02 10:06:19.680 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:20 localhost dnsmasq[313821]: exiting on receipt of SIGTERM Dec 2 05:06:20 localhost podman[314074]: 2025-12-02 10:06:20.122741319 +0000 UTC m=+0.059009970 container kill 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:06:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:06:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:06:20 localhost systemd[1]: libpod-745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1.scope: Deactivated successfully. 
Dec 2 05:06:20 localhost podman[314094]: 2025-12-02 10:06:20.186579367 +0000 UTC m=+0.036379574 container died 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:06:20 localhost systemd[1]: tmp-crun.OeU36P.mount: Deactivated successfully. Dec 2 05:06:20 localhost systemd[1]: var-lib-containers-storage-overlay-92d72d12a9f6e625de9b3e59f6b237633fb3ec1fad9bdbceba45fc659a116379-merged.mount: Deactivated successfully. Dec 2 05:06:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1-userdata-shm.mount: Deactivated successfully. Dec 2 05:06:20 localhost podman[314094]: 2025-12-02 10:06:20.23640892 +0000 UTC m=+0.086209117 container remove 745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bac36584-1981-4cdd-a84e-a0acd6701163, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:06:20 localhost systemd[1]: libpod-conmon-745bd6ed97c2a0850c3eb5ac0df955d5f986d06ceaa8165ec71dc925a2aa2aa1.scope: Deactivated successfully. 
Dec 2 05:06:20 localhost systemd[1]: run-netns-qdhcp\x2dbac36584\x2d1981\x2d4cdd\x2da84e\x2da0acd6701163.mount: Deactivated successfully. Dec 2 05:06:20 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:20.262 263406 INFO neutron.agent.dhcp.agent [None req-3f8b7e1e-35d8-46c4-8281-4a5160107d7f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:06:20 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:20.266 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:06:20 localhost podman[314095]: 2025-12-02 10:06:20.297433862 +0000 UTC m=+0.145923455 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Dec 2 05:06:20 localhost podman[314095]: 2025-12-02 10:06:20.333695122 +0000 UTC m=+0.182184795 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter) Dec 2 05:06:20 localhost podman[314097]: 2025-12-02 10:06:20.347740618 +0000 UTC m=+0.192883080 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:06:20 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:06:20 localhost podman[314097]: 2025-12-02 10:06:20.360024197 +0000 UTC m=+0.205166719 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:06:20 localhost systemd[1]: 
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:06:20 localhost nova_compute[281854]: 2025-12-02 10:06:20.830 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:20 localhost nova_compute[281854]: 2025-12-02 10:06:20.969 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:21 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:06:21 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:06:23 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:06:24 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:24.205 263406 INFO neutron.agent.linux.ip_lib [None req-bfa62cba-e49b-4769-89da-990f37989974 - - - - - -] Device tap2ac07a58-29 cannot be used as it has no MAC address#033[00m Dec 2 05:06:24 localhost nova_compute[281854]: 2025-12-02 10:06:24.231 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:24 localhost kernel: device tap2ac07a58-29 entered promiscuous mode Dec 2 05:06:24 localhost ovn_controller[154505]: 2025-12-02T10:06:24Z|00209|binding|INFO|Claiming lport 2ac07a58-2990-406d-9523-fa045a2131dc for this chassis. 
Dec 2 05:06:24 localhost NetworkManager[5965]: [1764669984.2417] manager: (tap2ac07a58-29): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Dec 2 05:06:24 localhost nova_compute[281854]: 2025-12-02 10:06:24.241 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:24 localhost ovn_controller[154505]: 2025-12-02T10:06:24Z|00210|binding|INFO|2ac07a58-2990-406d-9523-fa045a2131dc: Claiming unknown Dec 2 05:06:24 localhost systemd-udevd[314251]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:06:24 localhost journal[230136]: ethtool ioctl error on tap2ac07a58-29: No such device Dec 2 05:06:24 localhost ovn_controller[154505]: 2025-12-02T10:06:24Z|00211|binding|INFO|Setting lport 2ac07a58-2990-406d-9523-fa045a2131dc ovn-installed in OVS Dec 2 05:06:24 localhost ovn_controller[154505]: 2025-12-02T10:06:24Z|00212|binding|INFO|Setting lport 2ac07a58-2990-406d-9523-fa045a2131dc up in Southbound Dec 2 05:06:24 localhost journal[230136]: ethtool ioctl error on tap2ac07a58-29: No such device Dec 2 05:06:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:24.326 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-fe22b18d-d633-497b-bdda-39e8c539a772', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe22b18d-d633-497b-bdda-39e8c539a772', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af4132dc-7b0a-4214-baa9-f7da22b4f1e1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2ac07a58-2990-406d-9523-fa045a2131dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:06:24 localhost nova_compute[281854]: 2025-12-02 10:06:24.325 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:24.327 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 2ac07a58-2990-406d-9523-fa045a2131dc in datapath fe22b18d-d633-497b-bdda-39e8c539a772 bound to our chassis#033[00m Dec 2 05:06:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:24.328 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fe22b18d-d633-497b-bdda-39e8c539a772 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:06:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:24.329 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[d99fde58-3075-4469-8df5-6b2ffb4ef2db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:06:24 localhost journal[230136]: ethtool ioctl error on tap2ac07a58-29: No such device Dec 2 05:06:24 localhost journal[230136]: ethtool ioctl error on tap2ac07a58-29: No such device Dec 2 05:06:24 localhost journal[230136]: ethtool ioctl error on tap2ac07a58-29: No such device Dec 2 05:06:24 localhost 
journal[230136]: ethtool ioctl error on tap2ac07a58-29: No such device Dec 2 05:06:24 localhost nova_compute[281854]: 2025-12-02 10:06:24.345 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:24 localhost journal[230136]: ethtool ioctl error on tap2ac07a58-29: No such device Dec 2 05:06:24 localhost journal[230136]: ethtool ioctl error on tap2ac07a58-29: No such device Dec 2 05:06:24 localhost nova_compute[281854]: 2025-12-02 10:06:24.365 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:25 localhost podman[314321]: Dec 2 05:06:25 localhost podman[314321]: 2025-12-02 10:06:25.114551314 +0000 UTC m=+0.085077546 container create 1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe22b18d-d633-497b-bdda-39e8c539a772, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:06:25 localhost systemd[1]: Started libpod-conmon-1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf.scope. 
Dec 2 05:06:25 localhost podman[314321]: 2025-12-02 10:06:25.072196961 +0000 UTC m=+0.042723233 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:06:25 localhost ovn_controller[154505]: 2025-12-02T10:06:25Z|00213|binding|INFO|Removing iface tap2ac07a58-29 ovn-installed in OVS Dec 2 05:06:25 localhost ovn_controller[154505]: 2025-12-02T10:06:25Z|00214|binding|INFO|Removing lport 2ac07a58-2990-406d-9523-fa045a2131dc ovn-installed in OVS Dec 2 05:06:25 localhost nova_compute[281854]: 2025-12-02 10:06:25.178 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:25 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:25.176 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 001be18f-4e9c-45ce-91c6-140e331db5ae with type ""#033[00m Dec 2 05:06:25 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:25.181 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-fe22b18d-d633-497b-bdda-39e8c539a772', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe22b18d-d633-497b-bdda-39e8c539a772', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 
'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af4132dc-7b0a-4214-baa9-f7da22b4f1e1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2ac07a58-2990-406d-9523-fa045a2131dc) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:06:25 localhost nova_compute[281854]: 2025-12-02 10:06:25.185 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:25 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:25.186 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 2ac07a58-2990-406d-9523-fa045a2131dc in datapath fe22b18d-d633-497b-bdda-39e8c539a772 unbound from our chassis#033[00m Dec 2 05:06:25 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:25.188 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fe22b18d-d633-497b-bdda-39e8c539a772 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:06:25 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:25.189 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[127ce47f-100e-4618-a994-a6c2345ca6ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:06:25 localhost systemd[1]: Started libcrun container. 
Dec 2 05:06:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f4087dc34d8c0d48ea5d346f07ce2299945782a2fd8fa7e387242ee05d14e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:06:25 localhost podman[314321]: 2025-12-02 10:06:25.209352461 +0000 UTC m=+0.179878693 container init 1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe22b18d-d633-497b-bdda-39e8c539a772, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:06:25 localhost podman[314321]: 2025-12-02 10:06:25.21870075 +0000 UTC m=+0.189226992 container start 1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe22b18d-d633-497b-bdda-39e8c539a772, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:06:25 localhost dnsmasq[314339]: started, version 2.85 cachesize 150 Dec 2 05:06:25 localhost dnsmasq[314339]: DNS service limited to local subnets Dec 2 05:06:25 localhost dnsmasq[314339]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:06:25 localhost dnsmasq[314339]: warning: no upstream servers configured Dec 
2 05:06:25 localhost dnsmasq-dhcp[314339]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:06:25 localhost dnsmasq[314339]: read /var/lib/neutron/dhcp/fe22b18d-d633-497b-bdda-39e8c539a772/addn_hosts - 0 addresses Dec 2 05:06:25 localhost dnsmasq-dhcp[314339]: read /var/lib/neutron/dhcp/fe22b18d-d633-497b-bdda-39e8c539a772/host Dec 2 05:06:25 localhost dnsmasq-dhcp[314339]: read /var/lib/neutron/dhcp/fe22b18d-d633-497b-bdda-39e8c539a772/opts Dec 2 05:06:25 localhost ovn_controller[154505]: 2025-12-02T10:06:25Z|00215|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:06:25 localhost nova_compute[281854]: 2025-12-02 10:06:25.375 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:25 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:25.402 263406 INFO neutron.agent.dhcp.agent [None req-9c835e05-0daa-4f97-808f-ddaedf82ed46 - - - - - -] DHCP configuration for ports {'a54abf5d-e342-4c56-9ad6-39e109e70c55'} is completed#033[00m Dec 2 05:06:25 localhost systemd[1]: tmp-crun.zl9R65.mount: Deactivated successfully. 
Dec 2 05:06:25 localhost dnsmasq[314339]: exiting on receipt of SIGTERM Dec 2 05:06:25 localhost podman[314356]: 2025-12-02 10:06:25.591162425 +0000 UTC m=+0.068581565 container kill 1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe22b18d-d633-497b-bdda-39e8c539a772, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:06:25 localhost systemd[1]: libpod-1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf.scope: Deactivated successfully. Dec 2 05:06:25 localhost podman[314368]: 2025-12-02 10:06:25.659645228 +0000 UTC m=+0.054059428 container died 1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe22b18d-d633-497b-bdda-39e8c539a772, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:06:25 localhost podman[314368]: 2025-12-02 10:06:25.690180454 +0000 UTC m=+0.084594594 container cleanup 1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe22b18d-d633-497b-bdda-39e8c539a772, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base 
Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:06:25 localhost systemd[1]: libpod-conmon-1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf.scope: Deactivated successfully. Dec 2 05:06:25 localhost podman[314370]: 2025-12-02 10:06:25.750023885 +0000 UTC m=+0.136493013 container remove 1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe22b18d-d633-497b-bdda-39e8c539a772, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:06:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:06:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2144 writes, 23K keys, 2144 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.07 MB/s#012Cumulative WAL: 2144 writes, 2144 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2144 writes, 23K keys, 2144 commit groups, 1.0 writes per commit group, ingest: 41.34 MB, 0.07 MB/s#012Interval WAL: 2144 writes, 2144 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn 
KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 113.2 0.26 0.08 9 0.029 0 0 0.0 0.0#012 L6 1/0 15.87 MB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 4.5 180.7 163.6 0.81 0.33 8 0.101 98K 3974 0.0 0.0#012 Sum 1/0 15.87 MB 0.0 0.1 0.0 0.1 0.2 0.0 0.0 5.5 136.4 151.2 1.07 0.41 17 0.063 98K 3974 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.2 0.0 0.0 5.5 138.9 154.0 1.05 0.41 16 0.066 98K 3974 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 0.0 180.7 163.6 0.81 0.33 8 0.101 98K 3974 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 122.2 0.24 0.08 8 0.030 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.019 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.029#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.27 MB/s write, 0.14 GB read, 0.24 MB/s read, 1.1 seconds#012Interval compaction: 0.16 GB write, 0.27 MB/s write, 0.14 GB read, 0.24 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 
level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563183c47350#2 capacity: 308.00 MB usage: 29.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.0003 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1959,28.51 MB,9.25684%) FilterBlock(17,302.48 KB,0.0959074%) IndexBlock(17,388.86 KB,0.123294%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 2 05:06:25 localhost nova_compute[281854]: 2025-12-02 10:06:25.762 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:25 localhost kernel: device tap2ac07a58-29 left promiscuous mode Dec 2 05:06:25 localhost nova_compute[281854]: 2025-12-02 10:06:25.777 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:25 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:25.798 263406 INFO neutron.agent.dhcp.agent [None req-f6b26f45-c4f5-43f1-922f-9dd07a2c45a0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:06:25 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:25.799 263406 INFO neutron.agent.dhcp.agent [None req-f6b26f45-c4f5-43f1-922f-9dd07a2c45a0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:06:25 localhost nova_compute[281854]: 2025-12-02 10:06:25.831 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:25 localhost nova_compute[281854]: 2025-12-02 10:06:25.971 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:26 localhost systemd[1]: var-lib-containers-storage-overlay-41f4087dc34d8c0d48ea5d346f07ce2299945782a2fd8fa7e387242ee05d14e4-merged.mount: Deactivated successfully. Dec 2 05:06:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b3f857c124e3f98309a9f8e87b2b956861e7ec8434d2c564a7929dbf05455bf-userdata-shm.mount: Deactivated successfully. Dec 2 05:06:26 localhost systemd[1]: run-netns-qdhcp\x2dfe22b18d\x2dd633\x2d497b\x2dbdda\x2d39e8c539a772.mount: Deactivated successfully. Dec 2 05:06:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:26 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:26.457 2 INFO neutron.agent.securitygroups_rpc [None req-55db991b-3b52-42dd-b07a-80ab3fefc470 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['72de153d-340c-4642-ae21-72dcd91d8ceb']#033[00m Dec 2 05:06:26 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:26.625 2 INFO neutron.agent.securitygroups_rpc [None req-3231b6f4-a459-453f-8b66-50672725d94d ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['72de153d-340c-4642-ae21-72dcd91d8ceb']#033[00m Dec 2 05:06:27 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:27.732 2 INFO neutron.agent.securitygroups_rpc [None req-c8b7833a-90ae-4c85-a76b-1cc28b8de3de ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:28 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:28.064 2 INFO neutron.agent.securitygroups_rpc [None req-7e69b2ff-a5bb-4d6e-b782-47cf6529c7b2 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] 
Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:28 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:28.202 2 INFO neutron.agent.securitygroups_rpc [None req-01786b72-4967-4664-b0a8-c04c5ee043aa ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:06:28 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:28.411 2 INFO neutron.agent.securitygroups_rpc [None req-660a054c-2d13-4375-bdeb-e4f5c9661010 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:28 localhost podman[314397]: 2025-12-02 10:06:28.45531884 +0000 UTC m=+0.090932164 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:06:28 localhost podman[314397]: 2025-12-02 10:06:28.494991291 +0000 UTC m=+0.130604625 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:06:28 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:06:28 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:28.601 2 INFO neutron.agent.securitygroups_rpc [None req-62dae022-f3c9-4be5-80ba-7d0557eedc03 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:28 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:28.890 2 INFO neutron.agent.securitygroups_rpc [None req-ec75e4da-10e7-4730-8e6b-71c48e471048 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:29 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:29.257 2 INFO neutron.agent.securitygroups_rpc [None req-405af18e-31af-407d-8854-f380b293accc ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:29 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:29.526 2 INFO neutron.agent.securitygroups_rpc [None req-f8e3c35c-c137-4121-a943-a4b83494d8a2 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 
2 05:06:30 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:30.179 2 INFO neutron.agent.securitygroups_rpc [None req-9f2bf60d-db80-42ad-806a-1445118d8a03 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:30 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:30.785 2 INFO neutron.agent.securitygroups_rpc [None req-9a1f4b78-6988-4ca6-b0f0-b52a2438af33 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']#033[00m Dec 2 05:06:30 localhost nova_compute[281854]: 2025-12-02 10:06:30.833 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:30 localhost nova_compute[281854]: 2025-12-02 10:06:30.973 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:31 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:31.764 2 INFO neutron.agent.securitygroups_rpc [None req-1c27ab60-bacc-4e9e-b1c9-d6f9cf0e1b32 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['8ebd5526-cfd6-4dd0-8888-3d40098feb1a']#033[00m Dec 2 05:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 05:06:33 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:33.365 2 INFO neutron.agent.securitygroups_rpc [None req-ec9364dd-dd89-44e2-a668-097e6474f1a7 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['f835d0d9-69c7-416b-b19f-71e98abbea19']#033[00m Dec 2 05:06:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:06:33 localhost podman[314415]: 2025-12-02 10:06:33.448407369 +0000 UTC m=+0.073143637 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller) Dec 2 05:06:33 localhost podman[314414]: 
2025-12-02 10:06:33.501272794 +0000 UTC m=+0.127542254 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:06:33 localhost podman[314415]: 2025-12-02 10:06:33.518017412 +0000 UTC m=+0.142753620 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:06:33 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:06:33 localhost podman[314414]: 2025-12-02 10:06:33.539167087 +0000 UTC m=+0.165436567 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:06:33 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:06:33 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:33.625 2 INFO neutron.agent.securitygroups_rpc [None req-37778547-7b0e-4196-bc87-08fdc55b8adf ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['f835d0d9-69c7-416b-b19f-71e98abbea19']#033[00m Dec 2 05:06:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:33.900 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:06:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:33.901 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:06:33 localhost nova_compute[281854]: 2025-12-02 10:06:33.908 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:34 localhost openstack_network_exporter[242845]: ERROR 10:06:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:06:34 localhost openstack_network_exporter[242845]: ERROR 10:06:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:06:34 localhost openstack_network_exporter[242845]: ERROR 10:06:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket 
files found for ovn-northd Dec 2 05:06:34 localhost openstack_network_exporter[242845]: ERROR 10:06:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:06:34 localhost openstack_network_exporter[242845]: Dec 2 05:06:34 localhost openstack_network_exporter[242845]: ERROR 10:06:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:06:34 localhost openstack_network_exporter[242845]: Dec 2 05:06:35 localhost nova_compute[281854]: 2025-12-02 10:06:35.837 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:35 localhost nova_compute[281854]: 2025-12-02 10:06:35.975 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:36 localhost podman[240799]: time="2025-12-02T10:06:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:06:36 localhost podman[240799]: @ - - [02/Dec/2025:10:06:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 2 05:06:36 localhost podman[240799]: @ - - [02/Dec/2025:10:06:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19237 "" "Go-http-client/1.1" Dec 2 05:06:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:36.119 2 INFO neutron.agent.securitygroups_rpc [None req-4d405741-5ff5-4de3-bee3-cffdae397b25 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['a13484c0-648a-48f0-a8cb-29cdca97e066']#033[00m Dec 2 05:06:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:36 localhost 
neutron_sriov_agent[256494]: 2025-12-02 10:06:36.314 2 INFO neutron.agent.securitygroups_rpc [None req-b1eefb6d-0b2e-4576-bbe4-d313eb7d9799 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['a13484c0-648a-48f0-a8cb-29cdca97e066']#033[00m Dec 2 05:06:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:36.902 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:06:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:36.983 2 INFO neutron.agent.securitygroups_rpc [None req-9e362cac-bb61-4369-ad00-9f073d908c17 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']#033[00m Dec 2 05:06:37 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e115 e115: 6 total, 6 up, 6 in Dec 2 05:06:37 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:37.404 2 INFO neutron.agent.securitygroups_rpc [None req-24128e00-40c9-486f-b5be-6d4fcff90c40 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']#033[00m Dec 2 05:06:37 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:37.944 2 INFO neutron.agent.securitygroups_rpc [None req-c72c5dd9-5995-4df5-a6ad-e067a8fdaf10 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']#033[00m Dec 2 05:06:38 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:38.315 2 INFO neutron.agent.securitygroups_rpc [None req-1079cc4d-64e8-4687-8a8b-22fa9980bbe7 
ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']#033[00m Dec 2 05:06:38 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:38.599 2 INFO neutron.agent.securitygroups_rpc [None req-37425f6d-3678-4ce3-8643-3e657bae2eff ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']#033[00m Dec 2 05:06:38 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:38.920 2 INFO neutron.agent.securitygroups_rpc [None req-5dbcc1aa-f594-4336-8b87-6f8ec002ed0b ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']#033[00m Dec 2 05:06:39 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e116 e116: 6 total, 6 up, 6 in Dec 2 05:06:40 localhost nova_compute[281854]: 2025-12-02 10:06:40.841 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:40 localhost neutron_sriov_agent[256494]: 2025-12-02 10:06:40.868 2 INFO neutron.agent.securitygroups_rpc [None req-bbaaa1af-170e-49f9-ab51-fca299624b09 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['ef31c17a-e7e0-47e3-9c93-83c68ae18a93']#033[00m Dec 2 05:06:40 localhost nova_compute[281854]: 2025-12-02 10:06:40.976 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:42 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e117 e117: 6 total, 6 up, 6 in Dec 2 05:06:43 localhost 
neutron_dhcp_agent[263402]: 2025-12-02 10:06:43.233 263406 INFO neutron.agent.linux.ip_lib [None req-a7dd135d-703c-48ef-ab14-fbb77957ada6 - - - - - -] Device tap9a3b1032-b9 cannot be used as it has no MAC address#033[00m Dec 2 05:06:43 localhost nova_compute[281854]: 2025-12-02 10:06:43.255 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:43 localhost kernel: device tap9a3b1032-b9 entered promiscuous mode Dec 2 05:06:43 localhost NetworkManager[5965]: [1764670003.2643] manager: (tap9a3b1032-b9): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Dec 2 05:06:43 localhost nova_compute[281854]: 2025-12-02 10:06:43.263 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:43 localhost ovn_controller[154505]: 2025-12-02T10:06:43Z|00216|binding|INFO|Claiming lport 9a3b1032-b966-42f6-bd53-33e478407e4c for this chassis. Dec 2 05:06:43 localhost ovn_controller[154505]: 2025-12-02T10:06:43Z|00217|binding|INFO|9a3b1032-b966-42f6-bd53-33e478407e4c: Claiming unknown Dec 2 05:06:43 localhost systemd-udevd[314473]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:06:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:43.279 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-0add6e7f-ff15-4bda-be2f-1e656e720929', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0add6e7f-ff15-4bda-be2f-1e656e720929', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd99a7fe-e968-4c3f-96c7-184bb6138ed4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9a3b1032-b966-42f6-bd53-33e478407e4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:06:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:43.280 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 9a3b1032-b966-42f6-bd53-33e478407e4c in datapath 0add6e7f-ff15-4bda-be2f-1e656e720929 bound to our chassis#033[00m Dec 2 05:06:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:43.281 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0add6e7f-ff15-4bda-be2f-1e656e720929 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:06:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:43.282 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ef29d26c-0e09-4af3-9935-79a405885c23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:06:43 localhost journal[230136]: ethtool ioctl error on tap9a3b1032-b9: No such device Dec 2 05:06:43 localhost nova_compute[281854]: 2025-12-02 10:06:43.295 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:43 localhost ovn_controller[154505]: 2025-12-02T10:06:43Z|00218|binding|INFO|Setting lport 9a3b1032-b966-42f6-bd53-33e478407e4c ovn-installed in OVS Dec 2 05:06:43 localhost ovn_controller[154505]: 2025-12-02T10:06:43Z|00219|binding|INFO|Setting lport 9a3b1032-b966-42f6-bd53-33e478407e4c up in Southbound Dec 2 05:06:43 localhost nova_compute[281854]: 2025-12-02 10:06:43.301 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:43 localhost journal[230136]: ethtool ioctl error on tap9a3b1032-b9: No such device Dec 2 05:06:43 localhost nova_compute[281854]: 2025-12-02 10:06:43.304 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:43 localhost journal[230136]: ethtool ioctl error on tap9a3b1032-b9: No such device Dec 2 05:06:43 localhost journal[230136]: ethtool ioctl error on tap9a3b1032-b9: No such device Dec 2 05:06:43 localhost journal[230136]: ethtool ioctl error on tap9a3b1032-b9: No such device Dec 2 05:06:43 localhost journal[230136]: ethtool ioctl error on tap9a3b1032-b9: No such device Dec 2 05:06:43 localhost journal[230136]: ethtool ioctl error on tap9a3b1032-b9: No such device Dec 2 05:06:43 localhost 
journal[230136]: ethtool ioctl error on tap9a3b1032-b9: No such device Dec 2 05:06:43 localhost nova_compute[281854]: 2025-12-02 10:06:43.339 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:43 localhost nova_compute[281854]: 2025-12-02 10:06:43.363 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:44 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e118 e118: 6 total, 6 up, 6 in Dec 2 05:06:45 localhost podman[314542]: Dec 2 05:06:45 localhost podman[314542]: 2025-12-02 10:06:45.053012835 +0000 UTC m=+0.075170851 container create 783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:06:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:06:45 localhost systemd[1]: Started libpod-conmon-783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba.scope. Dec 2 05:06:45 localhost systemd[1]: tmp-crun.V9Ac3E.mount: Deactivated successfully. Dec 2 05:06:45 localhost podman[314542]: 2025-12-02 10:06:45.006468691 +0000 UTC m=+0.028626717 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:06:45 localhost systemd[1]: Started libcrun container. 
Dec 2 05:06:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1be98159917b52c833b727e30d0192370db67304a645d168e02eafe4efdd29/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:06:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e119 e119: 6 total, 6 up, 6 in Dec 2 05:06:45 localhost podman[314556]: 2025-12-02 10:06:45.18250161 +0000 UTC m=+0.094963561 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 05:06:45 localhost podman[314542]: 2025-12-02 10:06:45.190754521 +0000 UTC m=+0.212912537 container init 783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:06:45 localhost podman[314556]: 2025-12-02 10:06:45.195143718 +0000 UTC m=+0.107605719 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 
'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:06:45 localhost dnsmasq[314580]: started, version 2.85 cachesize 150 Dec 2 05:06:45 localhost dnsmasq[314580]: DNS service limited to local subnets Dec 2 05:06:45 localhost dnsmasq[314580]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:06:45 localhost dnsmasq[314580]: warning: no upstream servers configured Dec 2 05:06:45 localhost dnsmasq-dhcp[314580]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:06:45 localhost dnsmasq[314580]: read /var/lib/neutron/dhcp/0add6e7f-ff15-4bda-be2f-1e656e720929/addn_hosts - 0 addresses Dec 2 05:06:45 localhost dnsmasq-dhcp[314580]: read /var/lib/neutron/dhcp/0add6e7f-ff15-4bda-be2f-1e656e720929/host Dec 2 05:06:45 localhost dnsmasq-dhcp[314580]: read /var/lib/neutron/dhcp/0add6e7f-ff15-4bda-be2f-1e656e720929/opts Dec 2 05:06:45 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:06:45 localhost podman[314542]: 2025-12-02 10:06:45.251704401 +0000 UTC m=+0.273862417 container start 783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:06:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:45.474 263406 INFO neutron.agent.dhcp.agent [None req-e150396d-fe2a-47f4-bd57-c8391360238b - - - - - -] DHCP configuration for ports {'0e2e4efc-b473-4e89-b805-dc238578b32c'} is completed#033[00m Dec 2 05:06:45 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Dec 2 05:06:45 localhost ovn_controller[154505]: 2025-12-02T10:06:45Z|00220|binding|INFO|Removing iface tap9a3b1032-b9 ovn-installed in OVS Dec 2 05:06:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:45.858 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port da2c3546-7ee1-4f37-b792-e7fd0de3d136 with type ""#033[00m Dec 2 05:06:45 localhost ovn_controller[154505]: 2025-12-02T10:06:45Z|00221|binding|INFO|Removing lport 9a3b1032-b966-42f6-bd53-33e478407e4c ovn-installed in OVS Dec 2 05:06:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:45.860 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-0add6e7f-ff15-4bda-be2f-1e656e720929', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0add6e7f-ff15-4bda-be2f-1e656e720929', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd99a7fe-e968-4c3f-96c7-184bb6138ed4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9a3b1032-b966-42f6-bd53-33e478407e4c) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:06:45 localhost 
ovn_metadata_agent[160216]: 2025-12-02 10:06:45.862 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 9a3b1032-b966-42f6-bd53-33e478407e4c in datapath 0add6e7f-ff15-4bda-be2f-1e656e720929 unbound from our chassis#033[00m Dec 2 05:06:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:45.866 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0add6e7f-ff15-4bda-be2f-1e656e720929, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:06:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:45.867 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4ad27c-2092-4399-8f34-4743a7a1eae7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:06:45 localhost kernel: device tap9a3b1032-b9 left promiscuous mode Dec 2 05:06:45 localhost nova_compute[281854]: 2025-12-02 10:06:45.879 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:45 localhost nova_compute[281854]: 2025-12-02 10:06:45.896 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:45 localhost nova_compute[281854]: 2025-12-02 10:06:45.977 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:46 localhost dnsmasq[314580]: read /var/lib/neutron/dhcp/0add6e7f-ff15-4bda-be2f-1e656e720929/addn_hosts - 0 addresses Dec 2 05:06:46 localhost dnsmasq-dhcp[314580]: read /var/lib/neutron/dhcp/0add6e7f-ff15-4bda-be2f-1e656e720929/host Dec 2 05:06:46 localhost 
dnsmasq-dhcp[314580]: read /var/lib/neutron/dhcp/0add6e7f-ff15-4bda-be2f-1e656e720929/opts Dec 2 05:06:46 localhost podman[314600]: 2025-12-02 10:06:46.46238403 +0000 UTC m=+0.056429421 container kill 783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent [None req-fb7cfb80-ee60-4bb4-bfb4-e8433251b8a3 - - - - - -] Unable to reload_allocations dhcp for 0add6e7f-ff15-4bda-be2f-1e656e720929.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9a3b1032-b9 not found in namespace qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929. 
Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Dec 2 05:06:46 localhost 
neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent return fut.result() Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent return self.__get_result() Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent raise self._exception Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR 
neutron.agent.dhcp.agent raise exc_type(*result[2]) Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9a3b1032-b9 not found in namespace qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929. Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.485 263406 ERROR neutron.agent.dhcp.agent #033[00m Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.495 263406 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Dec 2 05:06:46 localhost ovn_controller[154505]: 2025-12-02T10:06:46Z|00222|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:06:46 localhost nova_compute[281854]: 2025-12-02 10:06:46.917 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.973 263406 INFO neutron.agent.dhcp.agent [None req-c86fd9c9-8c92-4391-b127-f78692fda812 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.974 263406 INFO neutron.agent.dhcp.agent [-] Starting network 0add6e7f-ff15-4bda-be2f-1e656e720929 dhcp configuration#033[00m Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.974 263406 INFO neutron.agent.dhcp.agent [-] Finished network 0add6e7f-ff15-4bda-be2f-1e656e720929 dhcp configuration#033[00m Dec 2 05:06:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:46.975 263406 INFO neutron.agent.dhcp.agent [None req-c86fd9c9-8c92-4391-b127-f78692fda812 - - - - - -] Synchronizing state complete#033[00m Dec 2 05:06:47 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e120 e120: 6 total, 6 up, 6 in Dec 2 05:06:47 localhost systemd[1]: tmp-crun.Qur7bZ.mount: 
Deactivated successfully. Dec 2 05:06:47 localhost dnsmasq[314580]: exiting on receipt of SIGTERM Dec 2 05:06:47 localhost podman[314630]: 2025-12-02 10:06:47.201392061 +0000 UTC m=+0.078875661 container kill 783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:06:47 localhost systemd[1]: libpod-783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba.scope: Deactivated successfully. Dec 2 05:06:47 localhost podman[314644]: 2025-12-02 10:06:47.273160791 +0000 UTC m=+0.051138359 container died 783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:06:47 localhost systemd[1]: tmp-crun.KNQrpf.mount: Deactivated successfully. 
Dec 2 05:06:47 localhost podman[314644]: 2025-12-02 10:06:47.389950656 +0000 UTC m=+0.167928174 container remove 783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0add6e7f-ff15-4bda-be2f-1e656e720929, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:06:47 localhost systemd[1]: libpod-conmon-783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba.scope: Deactivated successfully. Dec 2 05:06:47 localhost nova_compute[281854]: 2025-12-02 10:06:47.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:48 localhost systemd[1]: var-lib-containers-storage-overlay-4b1be98159917b52c833b727e30d0192370db67304a645d168e02eafe4efdd29-merged.mount: Deactivated successfully. Dec 2 05:06:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-783a875b8c2c04f3c57936e1c3177f445ef57df7b3da156745996c243a97e3ba-userdata-shm.mount: Deactivated successfully. Dec 2 05:06:48 localhost systemd[1]: run-netns-qdhcp\x2d0add6e7f\x2dff15\x2d4bda\x2dbe2f\x2d1e656e720929.mount: Deactivated successfully. Dec 2 05:06:48 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e121 e121: 6 total, 6 up, 6 in Dec 2 05:06:49 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e122 e122: 6 total, 6 up, 6 in Dec 2 05:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:06:49 localhost podman[314672]: 2025-12-02 10:06:49.442545978 +0000 UTC m=+0.084855701 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 2 05:06:49 localhost podman[314672]: 2025-12-02 10:06:49.477165425 +0000 UTC 
m=+0.119475158 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 2 05:06:49 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:06:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e123 e123: 6 total, 6 up, 6 in Dec 2 05:06:50 localhost nova_compute[281854]: 2025-12-02 10:06:50.881 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:50 localhost nova_compute[281854]: 2025-12-02 10:06:50.979 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e124 e124: 6 total, 6 up, 6 in Dec 2 05:06:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:06:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:06:51 localhost podman[314690]: 2025-12-02 10:06:51.437644274 +0000 UTC m=+0.078454560 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 2 05:06:51 localhost podman[314690]: 2025-12-02 10:06:51.449877852 +0000 UTC m=+0.090688138 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Dec 2 05:06:51 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:06:51 localhost podman[314691]: 2025-12-02 10:06:51.502085848 +0000 UTC m=+0.135979258 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:06:51 localhost podman[314691]: 2025-12-02 10:06:51.534863006 +0000 UTC m=+0.168756436 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:06:51 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:06:52 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e125 e125: 6 total, 6 up, 6 in Dec 2 05:06:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:53.435 263406 INFO neutron.agent.linux.ip_lib [None req-bd00e9ce-f091-424f-abf7-918223fa7a24 - - - - - -] Device tapa4104ef1-2f cannot be used as it has no MAC address#033[00m Dec 2 05:06:53 localhost nova_compute[281854]: 2025-12-02 10:06:53.495 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:53 localhost kernel: device tapa4104ef1-2f entered promiscuous mode Dec 2 05:06:53 localhost NetworkManager[5965]: [1764670013.5028] manager: (tapa4104ef1-2f): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Dec 2 05:06:53 localhost nova_compute[281854]: 2025-12-02 10:06:53.503 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:53 localhost ovn_controller[154505]: 2025-12-02T10:06:53Z|00223|binding|INFO|Claiming lport a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b for this chassis. Dec 2 05:06:53 localhost ovn_controller[154505]: 2025-12-02T10:06:53Z|00224|binding|INFO|a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b: Claiming unknown Dec 2 05:06:53 localhost systemd-udevd[314742]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:06:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:53.512 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-c6404ebe-76b1-43df-87e7-66f2b8167fb2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6404ebe-76b1-43df-87e7-66f2b8167fb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c964373-75d0-46a8-9c74-4755190151c0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:06:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:53.513 160221 INFO neutron.agent.ovn.metadata.agent [-] Port a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b in datapath c6404ebe-76b1-43df-87e7-66f2b8167fb2 bound to our chassis#033[00m Dec 2 05:06:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:53.513 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c6404ebe-76b1-43df-87e7-66f2b8167fb2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:06:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:53.514 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1b9698-87ce-4f66-bc51-d256188385e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:06:53 localhost journal[230136]: ethtool ioctl error on tapa4104ef1-2f: No such device Dec 2 05:06:53 localhost nova_compute[281854]: 2025-12-02 10:06:53.530 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:53 localhost journal[230136]: ethtool ioctl error on tapa4104ef1-2f: No such device Dec 2 05:06:53 localhost ovn_controller[154505]: 2025-12-02T10:06:53Z|00225|binding|INFO|Setting lport a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b ovn-installed in OVS Dec 2 05:06:53 localhost ovn_controller[154505]: 2025-12-02T10:06:53Z|00226|binding|INFO|Setting lport a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b up in Southbound Dec 2 05:06:53 localhost nova_compute[281854]: 2025-12-02 10:06:53.536 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:53 localhost nova_compute[281854]: 2025-12-02 10:06:53.538 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:53 localhost journal[230136]: ethtool ioctl error on tapa4104ef1-2f: No such device Dec 2 05:06:53 localhost journal[230136]: ethtool ioctl error on tapa4104ef1-2f: No such device Dec 2 05:06:53 localhost journal[230136]: ethtool ioctl error on tapa4104ef1-2f: No such device Dec 2 05:06:53 localhost journal[230136]: ethtool ioctl error on tapa4104ef1-2f: No such device Dec 2 05:06:53 localhost journal[230136]: ethtool ioctl error on tapa4104ef1-2f: No such device Dec 2 05:06:53 localhost 
journal[230136]: ethtool ioctl error on tapa4104ef1-2f: No such device Dec 2 05:06:53 localhost nova_compute[281854]: 2025-12-02 10:06:53.565 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:53 localhost nova_compute[281854]: 2025-12-02 10:06:53.591 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:54 localhost podman[314813]: Dec 2 05:06:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e126 e126: 6 total, 6 up, 6 in Dec 2 05:06:54 localhost podman[314813]: 2025-12-02 10:06:54.389529696 +0000 UTC m=+0.080242107 container create f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6404ebe-76b1-43df-87e7-66f2b8167fb2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:06:54 localhost systemd[1]: Started libpod-conmon-f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4.scope. Dec 2 05:06:54 localhost systemd[1]: tmp-crun.2mI1Ly.mount: Deactivated successfully. Dec 2 05:06:54 localhost podman[314813]: 2025-12-02 10:06:54.352582728 +0000 UTC m=+0.043295149 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:06:54 localhost systemd[1]: Started libcrun container. 
Dec 2 05:06:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d598670f9239e3b6596f34e730fcc2feb27320ff94456532e197a96679a822a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:06:54 localhost podman[314813]: 2025-12-02 10:06:54.475445775 +0000 UTC m=+0.166158206 container init f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6404ebe-76b1-43df-87e7-66f2b8167fb2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:06:54 localhost podman[314813]: 2025-12-02 10:06:54.486049159 +0000 UTC m=+0.176761570 container start f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6404ebe-76b1-43df-87e7-66f2b8167fb2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:06:54 localhost dnsmasq[314832]: started, version 2.85 cachesize 150 Dec 2 05:06:54 localhost dnsmasq[314832]: DNS service limited to local subnets Dec 2 05:06:54 localhost dnsmasq[314832]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:06:54 localhost dnsmasq[314832]: warning: no upstream servers configured Dec 
2 05:06:54 localhost dnsmasq-dhcp[314832]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:06:54 localhost dnsmasq[314832]: read /var/lib/neutron/dhcp/c6404ebe-76b1-43df-87e7-66f2b8167fb2/addn_hosts - 0 addresses Dec 2 05:06:54 localhost dnsmasq-dhcp[314832]: read /var/lib/neutron/dhcp/c6404ebe-76b1-43df-87e7-66f2b8167fb2/host Dec 2 05:06:54 localhost dnsmasq-dhcp[314832]: read /var/lib/neutron/dhcp/c6404ebe-76b1-43df-87e7-66f2b8167fb2/opts Dec 2 05:06:54 localhost ovn_controller[154505]: 2025-12-02T10:06:54Z|00227|binding|INFO|Removing iface tapa4104ef1-2f ovn-installed in OVS Dec 2 05:06:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:54.630 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 688cb983-65f3-4a37-8227-7785be0bfe50 with type ""#033[00m Dec 2 05:06:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:54.632 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-c6404ebe-76b1-43df-87e7-66f2b8167fb2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6404ebe-76b1-43df-87e7-66f2b8167fb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], 
mirror_rules=[], datapath=6c964373-75d0-46a8-9c74-4755190151c0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:06:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:54.634 160221 INFO neutron.agent.ovn.metadata.agent [-] Port a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b in datapath c6404ebe-76b1-43df-87e7-66f2b8167fb2 unbound from our chassis#033[00m Dec 2 05:06:54 localhost ovn_controller[154505]: 2025-12-02T10:06:54Z|00228|binding|INFO|Removing lport a4104ef1-2fd9-417a-9b3a-16b3d9d44a5b ovn-installed in OVS Dec 2 05:06:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:54.635 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c6404ebe-76b1-43df-87e7-66f2b8167fb2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:06:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:06:54.670 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[745a4d76-aeed-4bf9-b5ac-7824fe9c83f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:06:54 localhost nova_compute[281854]: 2025-12-02 10:06:54.671 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:54.673 263406 INFO neutron.agent.dhcp.agent [None req-27eab582-0bb0-44af-abee-65f8b3c16f57 - - - - - -] DHCP configuration for ports {'99741eee-a4cf-401b-bedc-bcdbd3907469'} is completed#033[00m Dec 2 05:06:54 localhost ovn_controller[154505]: 2025-12-02T10:06:54Z|00229|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:06:54 
localhost nova_compute[281854]: 2025-12-02 10:06:54.820 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:54 localhost dnsmasq[314832]: exiting on receipt of SIGTERM Dec 2 05:06:54 localhost systemd[1]: libpod-f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4.scope: Deactivated successfully. Dec 2 05:06:54 localhost podman[314850]: 2025-12-02 10:06:54.848860285 +0000 UTC m=+0.070615860 container kill f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6404ebe-76b1-43df-87e7-66f2b8167fb2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:06:54 localhost podman[314864]: 2025-12-02 10:06:54.930720044 +0000 UTC m=+0.067887997 container died f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6404ebe-76b1-43df-87e7-66f2b8167fb2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:06:54 localhost podman[314864]: 2025-12-02 10:06:54.961133758 +0000 UTC m=+0.098301621 container cleanup f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-c6404ebe-76b1-43df-87e7-66f2b8167fb2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:06:54 localhost systemd[1]: libpod-conmon-f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4.scope: Deactivated successfully. Dec 2 05:06:55 localhost podman[314866]: 2025-12-02 10:06:55.012000359 +0000 UTC m=+0.142818512 container remove f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6404ebe-76b1-43df-87e7-66f2b8167fb2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:06:55 localhost kernel: device tapa4104ef1-2f left promiscuous mode Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.026 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.051 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:06:55.073 263406 INFO neutron.agent.dhcp.agent [None req-e768fa79-8492-4015-a261-3893c4758a69 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:06:55 localhost 
neutron_dhcp_agent[263402]: 2025-12-02 10:06:55.073 263406 INFO neutron.agent.dhcp.agent [None req-e768fa79-8492-4015-a261-3893c4758a69 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:06:55 localhost systemd[1]: var-lib-containers-storage-overlay-0d598670f9239e3b6596f34e730fcc2feb27320ff94456532e197a96679a822a-merged.mount: Deactivated successfully. Dec 2 05:06:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f42a51183e3cd63134ff68027ab168b27c9435d67c22030f26e0286bac929fb4-userdata-shm.mount: Deactivated successfully. Dec 2 05:06:55 localhost systemd[1]: run-netns-qdhcp\x2dc6404ebe\x2d76b1\x2d43df\x2d87e7\x2d66f2b8167fb2.mount: Deactivated successfully. Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.917 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.933 281858 DEBUG oslo_concurrency.lockutils [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.933 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.933 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.934 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:06:55 localhost nova_compute[281854]: 2025-12-02 10:06:55.981 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:06:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e127 e127: 6 total, 6 up, 6 in Dec 2 05:06:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.635 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": 
"4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.652 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.653 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.653 281858 DEBUG oslo_service.periodic_task [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.654 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.654 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.670 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.671 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.671 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:06:56 localhost 
nova_compute[281854]: 2025-12-02 10:06:56.672 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:06:56 localhost nova_compute[281854]: 2025-12-02 10:06:56.672 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:06:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:06:57 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2616537687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.132 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.213 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.214 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path 
_get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:06:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e128 e128: 6 total, 6 up, 6 in Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.464 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.465 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11278MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", 
"dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.466 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.466 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.591 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.591 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.592 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:06:57 localhost nova_compute[281854]: 2025-12-02 10:06:57.654 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:06:58 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:06:58 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3420145848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:06:58 localhost nova_compute[281854]: 2025-12-02 10:06:58.079 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:06:58 localhost nova_compute[281854]: 2025-12-02 10:06:58.086 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:06:58 localhost nova_compute[281854]: 2025-12-02 10:06:58.108 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:06:58 localhost nova_compute[281854]: 2025-12-02 10:06:58.111 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:06:58 localhost nova_compute[281854]: 2025-12-02 10:06:58.111 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:06:58 localhost nova_compute[281854]: 2025-12-02 10:06:58.285 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:06:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:06:59 localhost podman[314939]: 2025-12-02 10:06:59.455843296 +0000 UTC m=+0.088658533 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd) Dec 2 05:06:59 localhost podman[314939]: 2025-12-02 10:06:59.474336431 +0000 UTC m=+0.107151678 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Dec 2 05:06:59 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:06:59 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e129 e129: 6 total, 6 up, 6 in Dec 2 05:07:00 localhost nova_compute[281854]: 2025-12-02 10:07:00.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:00 localhost nova_compute[281854]: 2025-12-02 10:07:00.921 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:00 localhost nova_compute[281854]: 2025-12-02 10:07:00.983 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e130 e130: 6 total, 6 up, 6 in Dec 2 05:07:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:02 localhost nova_compute[281854]: 2025-12-02 10:07:02.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:02 localhost nova_compute[281854]: 2025-12-02 10:07:02.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:03.050 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:07:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:03.051 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:07:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:03.052 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:07:03 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e131 e131: 6 total, 6 up, 6 in Dec 2 05:07:03 localhost nova_compute[281854]: 2025-12-02 10:07:03.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:04 localhost openstack_network_exporter[242845]: ERROR 10:07:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd 
Dec 2 05:07:04 localhost openstack_network_exporter[242845]: ERROR 10:07:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:07:04 localhost openstack_network_exporter[242845]: ERROR 10:07:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:07:04 localhost openstack_network_exporter[242845]: ERROR 10:07:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:07:04 localhost openstack_network_exporter[242845]: Dec 2 05:07:04 localhost openstack_network_exporter[242845]: ERROR 10:07:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:07:04 localhost openstack_network_exporter[242845]: Dec 2 05:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:07:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:07:04 localhost podman[314959]: 2025-12-02 10:07:04.432763225 +0000 UTC m=+0.078790198 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:07:04 localhost podman[314959]: 2025-12-02 10:07:04.470107665 +0000 UTC m=+0.116134588 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:07:04 localhost podman[314960]: 2025-12-02 10:07:04.486108953 +0000 UTC m=+0.125685824 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller) Dec 2 05:07:04 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:07:04 localhost podman[314960]: 2025-12-02 10:07:04.52710614 +0000 UTC m=+0.166683071 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:04 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:07:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e132 e132: 6 total, 6 up, 6 in Dec 2 05:07:05 localhost nova_compute[281854]: 2025-12-02 10:07:05.923 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:05 localhost nova_compute[281854]: 2025-12-02 10:07:05.984 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:06 localhost podman[240799]: time="2025-12-02T10:07:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:07:06 localhost podman[240799]: @ - - [02/Dec/2025:10:07:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156111 "" "Go-http-client/1.1" Dec 2 05:07:06 localhost podman[240799]: @ - - [02/Dec/2025:10:07:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19247 "" "Go-http-client/1.1" Dec 2 05:07:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:07:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 8033 writes, 34K keys, 8033 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 8033 writes, 1906 syncs, 4.21 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3169 writes, 12K keys, 3169 commit groups, 1.0 writes per commit group, ingest: 12.35 MB, 0.02 MB/s#012Interval WAL: 3169 writes, 1306 syncs, 2.43 writes per sync, written: 0.01 GB, 
0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 05:07:10 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:10.773 2 INFO neutron.agent.securitygroups_rpc [None req-f9cb0719-6f1e-4498-ae5c-3d1490c7cf9b c04b0c1b682647b3a235292b9ca1b605 2b57b1fad39449b49cbbffbb5c62906d - - default default] Security group member updated ['dba82d8e-ac81-4899-ab61-fcab2136c60b']#033[00m Dec 2 05:07:10 localhost nova_compute[281854]: 2025-12-02 10:07:10.926 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:10 localhost nova_compute[281854]: 2025-12-02 10:07:10.985 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 e133: 6 total, 6 up, 6 in Dec 2 05:07:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:11 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:11.353 2 INFO neutron.agent.securitygroups_rpc [None req-0dd50eb4-1108-4728-8f57-13a5878ac244 c04b0c1b682647b3a235292b9ca1b605 2b57b1fad39449b49cbbffbb5c62906d - - default default] Security group member updated ['dba82d8e-ac81-4899-ab61-fcab2136c60b']#033[00m Dec 2 05:07:11 localhost dnsmasq[314056]: exiting on receipt of SIGTERM Dec 2 05:07:11 localhost podman[315022]: 2025-12-02 10:07:11.798205142 +0000 UTC m=+0.066610393 container kill 99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45b393b4-6935-41d7-9b33-e0a50bae89a0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:11 localhost systemd[1]: libpod-99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818.scope: Deactivated successfully. Dec 2 05:07:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:11.826 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 144a91f0-4a98-4ccb-bad3-9780bb2aa0f5 with type ""#033[00m Dec 2 05:07:11 localhost ovn_controller[154505]: 2025-12-02T10:07:11Z|00230|binding|INFO|Removing iface tap9452ee28-23 ovn-installed in OVS Dec 2 05:07:11 localhost ovn_controller[154505]: 2025-12-02T10:07:11Z|00231|binding|INFO|Removing lport 9452ee28-2385-4409-9420-aba511c252a5 ovn-installed in OVS Dec 2 05:07:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:11.828 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.243/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-45b393b4-6935-41d7-9b33-e0a50bae89a0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45b393b4-6935-41d7-9b33-e0a50bae89a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=e1555f2d-4a9f-4453-a712-6d4c971353c9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9452ee28-2385-4409-9420-aba511c252a5) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:11 localhost nova_compute[281854]: 2025-12-02 10:07:11.828 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:11.831 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 9452ee28-2385-4409-9420-aba511c252a5 in datapath 45b393b4-6935-41d7-9b33-e0a50bae89a0 unbound from our chassis#033[00m Dec 2 05:07:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:11.833 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45b393b4-6935-41d7-9b33-e0a50bae89a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:07:11 localhost nova_compute[281854]: 2025-12-02 10:07:11.835 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:11.835 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[d7104cf0-2bf7-4791-8e8e-7c79f99ae902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:11 localhost podman[315036]: 2025-12-02 10:07:11.861832564 +0000 UTC m=+0.052069383 container died 99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45b393b4-6935-41d7-9b33-e0a50bae89a0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:07:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818-userdata-shm.mount: Deactivated successfully. Dec 2 05:07:11 localhost podman[315036]: 2025-12-02 10:07:11.944600979 +0000 UTC m=+0.134837798 container cleanup 99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45b393b4-6935-41d7-9b33-e0a50bae89a0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:11 localhost systemd[1]: libpod-conmon-99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818.scope: Deactivated successfully. 
Dec 2 05:07:11 localhost podman[315043]: 2025-12-02 10:07:11.96928234 +0000 UTC m=+0.139095943 container remove 99f3b0be960c9abe405b7cae3b1b11db238865a75f90ba42cfe676a03242a818 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-45b393b4-6935-41d7-9b33-e0a50bae89a0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:12 localhost nova_compute[281854]: 2025-12-02 10:07:12.024 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:12 localhost kernel: device tap9452ee28-23 left promiscuous mode Dec 2 05:07:12 localhost nova_compute[281854]: 2025-12-02 10:07:12.046 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:12 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:12.061 263406 INFO neutron.agent.dhcp.agent [None req-c1de1da9-7936-44ad-a0fd-32d4afbddc6f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:12 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:12.062 263406 INFO neutron.agent.dhcp.agent [None req-c1de1da9-7936-44ad-a0fd-32d4afbddc6f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:12 localhost ovn_controller[154505]: 2025-12-02T10:07:12Z|00232|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:07:12 localhost nova_compute[281854]: 2025-12-02 10:07:12.195 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:12 localhost systemd[1]: var-lib-containers-storage-overlay-1bc83d9e759505b4e4211d0f2aa7f5dccd534965c1725514fdb16919fad32d11-merged.mount: Deactivated successfully. Dec 2 05:07:12 localhost systemd[1]: run-netns-qdhcp\x2d45b393b4\x2d6935\x2d41d7\x2d9b33\x2de0a50bae89a0.mount: Deactivated successfully. Dec 2 05:07:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:07:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.2 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 10K writes, 2705 syncs, 3.78 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4296 writes, 14K keys, 4296 commit groups, 1.0 writes per commit group, ingest: 13.86 MB, 0.02 MB/s#012Interval WAL: 4296 writes, 1841 syncs, 2.33 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 05:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:07:15 localhost podman[315065]: 2025-12-02 10:07:15.446895514 +0000 UTC m=+0.086681450 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:15 localhost podman[315065]: 2025-12-02 10:07:15.465672216 +0000 UTC m=+0.105458183 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:15 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:07:15 localhost nova_compute[281854]: 2025-12-02 10:07:15.929 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:15 localhost nova_compute[281854]: 2025-12-02 10:07:15.987 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:07:19 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3564427763' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:07:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:07:19 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3564427763' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:07:20 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:20.070 2 INFO neutron.agent.securitygroups_rpc [None req-0ca3d260-f659-40d6-b699-f106118e6211 f6abbbfcc7d54e81b5693b2401a25e09 5ea39db037534e2087a54e8a052ad24e - - default default] Security group member updated ['377ae0fe-81df-41e0-8ef6-1afd307f6beb']#033[00m Dec 2 05:07:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:07:20 localhost podman[315086]: 2025-12-02 10:07:20.438669298 +0000 UTC m=+0.076076696 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent) Dec 2 05:07:20 localhost podman[315086]: 2025-12-02 10:07:20.449246061 +0000 UTC 
m=+0.086653399 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:07:20 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:07:20 localhost nova_compute[281854]: 2025-12-02 10:07:20.934 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:21 localhost nova_compute[281854]: 2025-12-02 10:07:21.021 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:21 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:21.634 2 INFO neutron.agent.securitygroups_rpc [None req-9c96eaf7-e1a3-4805-a32c-c883041fe7ca f6abbbfcc7d54e81b5693b2401a25e09 5ea39db037534e2087a54e8a052ad24e - - default default] Security group member updated ['377ae0fe-81df-41e0-8ef6-1afd307f6beb']#033[00m Dec 2 05:07:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:07:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:07:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:21.758 263406 INFO neutron.agent.linux.ip_lib [None req-3234796a-2d9a-434c-a89f-a4edb2007520 - - - - - -] Device tap308729f2-5c cannot be used as it has no MAC address#033[00m Dec 2 05:07:21 localhost nova_compute[281854]: 2025-12-02 10:07:21.790 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:21 localhost systemd[1]: tmp-crun.TXZXEP.mount: Deactivated successfully. 
Dec 2 05:07:21 localhost podman[315106]: 2025-12-02 10:07:21.800795849 +0000 UTC m=+0.101564108 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:07:21 localhost kernel: device tap308729f2-5c entered promiscuous mode Dec 2 05:07:21 localhost ovn_controller[154505]: 2025-12-02T10:07:21Z|00233|binding|INFO|Claiming lport 308729f2-5cef-4da6-a8d1-12678a8ce24b for this chassis. 
Dec 2 05:07:21 localhost NetworkManager[5965]: [1764670041.8101] manager: (tap308729f2-5c): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Dec 2 05:07:21 localhost ovn_controller[154505]: 2025-12-02T10:07:21Z|00234|binding|INFO|308729f2-5cef-4da6-a8d1-12678a8ce24b: Claiming unknown Dec 2 05:07:21 localhost nova_compute[281854]: 2025-12-02 10:07:21.809 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:21.821 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27cf39916c5c4bc1833487052acaa22a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c696e62-8d3a-4321-b6e9-84ecff5ee056, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=308729f2-5cef-4da6-a8d1-12678a8ce24b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:21 localhost ovn_metadata_agent[160216]: 
2025-12-02 10:07:21.823 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 308729f2-5cef-4da6-a8d1-12678a8ce24b in datapath a69b1d7f-7b6a-4e57-97d2-3e13016a1afd bound to our chassis#033[00m Dec 2 05:07:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:21.825 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a69b1d7f-7b6a-4e57-97d2-3e13016a1afd or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:21 localhost systemd-udevd[315145]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:07:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:21.826 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[028e13ff-efea-4a9a-91ef-dc64d382f765]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:21 localhost ovn_controller[154505]: 2025-12-02T10:07:21Z|00235|binding|INFO|Setting lport 308729f2-5cef-4da6-a8d1-12678a8ce24b ovn-installed in OVS Dec 2 05:07:21 localhost ovn_controller[154505]: 2025-12-02T10:07:21Z|00236|binding|INFO|Setting lport 308729f2-5cef-4da6-a8d1-12678a8ce24b up in Southbound Dec 2 05:07:21 localhost nova_compute[281854]: 2025-12-02 10:07:21.834 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:21 localhost nova_compute[281854]: 2025-12-02 10:07:21.844 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:21 localhost journal[230136]: ethtool ioctl error on tap308729f2-5c: No such device Dec 2 05:07:21 localhost nova_compute[281854]: 2025-12-02 10:07:21.855 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
05:07:21 localhost journal[230136]: ethtool ioctl error on tap308729f2-5c: No such device Dec 2 05:07:21 localhost systemd[1]: tmp-crun.nZ2C9J.mount: Deactivated successfully. Dec 2 05:07:21 localhost journal[230136]: ethtool ioctl error on tap308729f2-5c: No such device Dec 2 05:07:21 localhost podman[315105]: 2025-12-02 10:07:21.872674202 +0000 UTC m=+0.178588599 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible) Dec 2 05:07:21 localhost journal[230136]: ethtool ioctl error on tap308729f2-5c: No such device Dec 2 05:07:21 localhost journal[230136]: ethtool ioctl error on tap308729f2-5c: No such device Dec 2 05:07:21 localhost podman[315106]: 2025-12-02 10:07:21.883371159 +0000 UTC m=+0.184139378 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', 
'--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:07:21 localhost journal[230136]: ethtool ioctl error on tap308729f2-5c: No such device Dec 2 05:07:21 localhost journal[230136]: ethtool ioctl error on tap308729f2-5c: No such device Dec 2 05:07:21 localhost journal[230136]: ethtool ioctl error on tap308729f2-5c: No such device Dec 2 05:07:21 localhost nova_compute[281854]: 2025-12-02 10:07:21.897 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:21 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:07:21 localhost podman[315105]: 2025-12-02 10:07:21.910462423 +0000 UTC m=+0.216376820 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.) Dec 2 05:07:21 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:07:21 localhost nova_compute[281854]: 2025-12-02 10:07:21.937 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:22 localhost podman[315295]: Dec 2 05:07:22 localhost podman[315295]: 2025-12-02 10:07:22.869020637 +0000 UTC m=+0.098517877 container create 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:07:22 localhost podman[315295]: 2025-12-02 10:07:22.821767633 +0000 UTC m=+0.051264923 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:07:22 
localhost systemd[1]: Started libpod-conmon-2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f.scope. Dec 2 05:07:22 localhost systemd[1]: Started libcrun container. Dec 2 05:07:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55241c631c38010f44af375554574eb4384ea2395685ed6f7e3376d51f08e8a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:07:22 localhost podman[315295]: 2025-12-02 10:07:22.958564562 +0000 UTC m=+0.188061802 container init 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:07:22 localhost podman[315295]: 2025-12-02 10:07:22.968387326 +0000 UTC m=+0.197884576 container start 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:22 localhost dnsmasq[315330]: started, version 2.85 cachesize 150 Dec 2 05:07:22 localhost dnsmasq[315330]: DNS service limited to local subnets Dec 2 05:07:22 localhost dnsmasq[315330]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 
DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:07:22 localhost dnsmasq[315330]: warning: no upstream servers configured Dec 2 05:07:22 localhost dnsmasq-dhcp[315330]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:07:22 localhost dnsmasq[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/addn_hosts - 0 addresses Dec 2 05:07:22 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/host Dec 2 05:07:22 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/opts Dec 2 05:07:23 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:07:23 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:07:23 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:23.163 263406 INFO neutron.agent.dhcp.agent [None req-89d967dc-0fa0-4d0c-80ef-94d7aaebaba3 - - - - - -] DHCP configuration for ports {'33a72976-31c7-4593-980e-42e36905a8a1'} is completed#033[00m Dec 2 05:07:23 localhost nova_compute[281854]: 2025-12-02 10:07:23.476 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:25 localhost nova_compute[281854]: 2025-12-02 10:07:25.937 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:26 localhost nova_compute[281854]: 2025-12-02 10:07:26.023 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 
2 05:07:26 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:26.984 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:07:28 localhost nova_compute[281854]: 2025-12-02 10:07:28.334 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:28.611 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:28Z, description=, device_id=d23c300d-2106-463f-ba69-eebcc6860c57, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=084a8cd7-7670-4ca5-8e64-7cf76391b695, ip_allocation=immediate, mac_address=fa:16:3e:41:3f:64, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:19Z, description=, dns_domain=, id=a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-994176625, port_security_enabled=True, project_id=27cf39916c5c4bc1833487052acaa22a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1587, status=ACTIVE, subnets=['21bb9eae-69df-40d9-8e2c-fcdc977cf0ec'], tags=[], tenant_id=27cf39916c5c4bc1833487052acaa22a, updated_at=2025-12-02T10:07:20Z, vlan_transparent=None, network_id=a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, port_security_enabled=False, project_id=27cf39916c5c4bc1833487052acaa22a, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1661, status=DOWN, tags=[], tenant_id=27cf39916c5c4bc1833487052acaa22a, updated_at=2025-12-02T10:07:28Z on network a69b1d7f-7b6a-4e57-97d2-3e13016a1afd#033[00m Dec 2 05:07:28 localhost dnsmasq[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/addn_hosts - 1 addresses Dec 2 05:07:28 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/host Dec 2 05:07:28 localhost podman[315348]: 2025-12-02 10:07:28.857884677 +0000 UTC m=+0.070226050 container kill 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:07:28 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/opts Dec 2 05:07:28 localhost systemd[1]: tmp-crun.y1r30i.mount: Deactivated successfully. 
Dec 2 05:07:29 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:29.049 263406 INFO neutron.agent.dhcp.agent [None req-a9fcff92-5fbd-4c4d-9439-73c8a59bdda5 - - - - - -] DHCP configuration for ports {'084a8cd7-7670-4ca5-8e64-7cf76391b695'} is completed#033[00m Dec 2 05:07:30 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:30.099 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:28Z, description=, device_id=d23c300d-2106-463f-ba69-eebcc6860c57, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=084a8cd7-7670-4ca5-8e64-7cf76391b695, ip_allocation=immediate, mac_address=fa:16:3e:41:3f:64, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:19Z, description=, dns_domain=, id=a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-994176625, port_security_enabled=True, project_id=27cf39916c5c4bc1833487052acaa22a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1587, status=ACTIVE, subnets=['21bb9eae-69df-40d9-8e2c-fcdc977cf0ec'], tags=[], tenant_id=27cf39916c5c4bc1833487052acaa22a, updated_at=2025-12-02T10:07:20Z, vlan_transparent=None, network_id=a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, port_security_enabled=False, project_id=27cf39916c5c4bc1833487052acaa22a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1661, status=DOWN, tags=[], tenant_id=27cf39916c5c4bc1833487052acaa22a, 
updated_at=2025-12-02T10:07:28Z on network a69b1d7f-7b6a-4e57-97d2-3e13016a1afd#033[00m Dec 2 05:07:30 localhost podman[315387]: 2025-12-02 10:07:30.310038847 +0000 UTC m=+0.059033511 container kill 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:07:30 localhost dnsmasq[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/addn_hosts - 1 addresses Dec 2 05:07:30 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/host Dec 2 05:07:30 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/opts Dec 2 05:07:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:07:30 localhost podman[315401]: 2025-12-02 10:07:30.426404699 +0000 UTC m=+0.090998485 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:07:30 localhost podman[315401]: 2025-12-02 10:07:30.467440437 +0000 UTC m=+0.132034253 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:07:30 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:07:30 localhost ovn_controller[154505]: 2025-12-02T10:07:30Z|00237|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:07:30 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:30.615 263406 INFO neutron.agent.dhcp.agent [None req-34b364a2-5ef4-4684-a915-b750ca049285 - - - - - -] DHCP configuration for ports {'084a8cd7-7670-4ca5-8e64-7cf76391b695'} is completed#033[00m Dec 2 05:07:30 localhost nova_compute[281854]: 2025-12-02 10:07:30.615 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:30 localhost nova_compute[281854]: 2025-12-02 10:07:30.941 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:31 localhost nova_compute[281854]: 2025-12-02 10:07:31.025 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:32 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:32.906 2 INFO neutron.agent.securitygroups_rpc [None req-4959ff7d-aca7-46d0-9143-48c9e561106c c695c8d7887d4f5d99397fbd9a108bd7 27cf39916c5c4bc1833487052acaa22a - - default default] Security group member updated ['202778bd-7cc5-43e0-846c-ad0385193194']#033[00m Dec 2 05:07:33 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:33.373 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:32Z, description=, device_id=, device_owner=, dns_assignment=[], 
dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=11cf6714-735d-4d40-b8a7-a3e1e579243a, ip_allocation=immediate, mac_address=fa:16:3e:1e:20:fe, name=tempest-FloatingIPAdminTestJSON-2021538240, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:19Z, description=, dns_domain=, id=a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-994176625, port_security_enabled=True, project_id=27cf39916c5c4bc1833487052acaa22a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1587, status=ACTIVE, subnets=['21bb9eae-69df-40d9-8e2c-fcdc977cf0ec'], tags=[], tenant_id=27cf39916c5c4bc1833487052acaa22a, updated_at=2025-12-02T10:07:20Z, vlan_transparent=None, network_id=a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, port_security_enabled=True, project_id=27cf39916c5c4bc1833487052acaa22a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['202778bd-7cc5-43e0-846c-ad0385193194'], standard_attr_id=1687, status=DOWN, tags=[], tenant_id=27cf39916c5c4bc1833487052acaa22a, updated_at=2025-12-02T10:07:32Z on network a69b1d7f-7b6a-4e57-97d2-3e13016a1afd#033[00m Dec 2 05:07:33 localhost dnsmasq[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/addn_hosts - 2 addresses Dec 2 05:07:33 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/host Dec 2 05:07:33 localhost podman[315446]: 2025-12-02 10:07:33.682454287 +0000 UTC m=+0.059973955 container kill 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:07:33 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/opts Dec 2 05:07:33 localhost ovn_controller[154505]: 2025-12-02T10:07:33Z|00238|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:07:33 localhost nova_compute[281854]: 2025-12-02 10:07:33.849 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:34.018 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:34.020 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:07:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:34.025 263406 INFO neutron.agent.dhcp.agent [None req-e0392596-821e-43c4-80e0-e18c24f994c3 - - - - - -] DHCP configuration for ports 
{'11cf6714-735d-4d40-b8a7-a3e1e579243a'} is completed#033[00m Dec 2 05:07:34 localhost openstack_network_exporter[242845]: ERROR 10:07:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:07:34 localhost openstack_network_exporter[242845]: ERROR 10:07:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:07:34 localhost openstack_network_exporter[242845]: ERROR 10:07:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:07:34 localhost openstack_network_exporter[242845]: ERROR 10:07:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:07:34 localhost openstack_network_exporter[242845]: Dec 2 05:07:34 localhost openstack_network_exporter[242845]: ERROR 10:07:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:07:34 localhost openstack_network_exporter[242845]: Dec 2 05:07:34 localhost nova_compute[281854]: 2025-12-02 10:07:34.059 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:34.579 263406 INFO neutron.agent.linux.ip_lib [None req-c4a37e31-b281-45ef-9781-e6100274d33c - - - - - -] Device tap0f8026b8-d6 cannot be used as it has no MAC address#033[00m Dec 2 05:07:34 localhost nova_compute[281854]: 2025-12-02 10:07:34.602 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost kernel: device tap0f8026b8-d6 entered promiscuous mode Dec 2 05:07:34 localhost nova_compute[281854]: 2025-12-02 10:07:34.612 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 
localhost NetworkManager[5965]: [1764670054.6143] manager: (tap0f8026b8-d6): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Dec 2 05:07:34 localhost ovn_controller[154505]: 2025-12-02T10:07:34Z|00239|binding|INFO|Claiming lport 0f8026b8-d62d-493f-8190-d2e80ee812a4 for this chassis. Dec 2 05:07:34 localhost ovn_controller[154505]: 2025-12-02T10:07:34Z|00240|binding|INFO|0f8026b8-d62d-493f-8190-d2e80ee812a4: Claiming unknown Dec 2 05:07:34 localhost systemd-udevd[315477]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:07:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:34.630 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0f8026b8-d62d-493f-8190-d2e80ee812a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 
05:07:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:34.631 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0f8026b8-d62d-493f-8190-d2e80ee812a4 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:34.632 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:34.633 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[e8053f1e-783f-4d1c-9453-6e9c3ffa9959]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:07:34 localhost journal[230136]: ethtool ioctl error on tap0f8026b8-d6: No such device Dec 2 05:07:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:07:34 localhost nova_compute[281854]: 2025-12-02 10:07:34.656 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost ovn_controller[154505]: 2025-12-02T10:07:34Z|00241|binding|INFO|Setting lport 0f8026b8-d62d-493f-8190-d2e80ee812a4 ovn-installed in OVS Dec 2 05:07:34 localhost ovn_controller[154505]: 2025-12-02T10:07:34Z|00242|binding|INFO|Setting lport 0f8026b8-d62d-493f-8190-d2e80ee812a4 up in Southbound Dec 2 05:07:34 localhost journal[230136]: ethtool ioctl error on tap0f8026b8-d6: No such device Dec 2 05:07:34 localhost nova_compute[281854]: 2025-12-02 10:07:34.662 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost nova_compute[281854]: 2025-12-02 10:07:34.668 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost journal[230136]: ethtool ioctl error on tap0f8026b8-d6: No such device Dec 2 05:07:34 localhost journal[230136]: ethtool ioctl error on tap0f8026b8-d6: No such device Dec 2 05:07:34 localhost journal[230136]: ethtool ioctl error on tap0f8026b8-d6: No such device Dec 2 05:07:34 localhost journal[230136]: ethtool ioctl error on tap0f8026b8-d6: No such device Dec 2 05:07:34 localhost journal[230136]: ethtool ioctl error on tap0f8026b8-d6: No such device Dec 2 05:07:34 localhost journal[230136]: ethtool ioctl error on tap0f8026b8-d6: No such device Dec 2 05:07:34 localhost nova_compute[281854]: 2025-12-02 10:07:34.696 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost podman[315482]: 2025-12-02 10:07:34.713641816 +0000 UTC m=+0.055724312 container health_status 
53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:07:34 localhost nova_compute[281854]: 2025-12-02 10:07:34.719 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:34 localhost podman[315482]: 2025-12-02 10:07:34.727936458 +0000 UTC m=+0.070018954 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:07:34 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:07:34 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:34.798 2 INFO neutron.agent.securitygroups_rpc [None req-7ec4eb97-1d35-4cb1-ad23-be6283df01c3 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:34 localhost podman[315484]: 2025-12-02 10:07:34.795533116 +0000 UTC m=+0.131802287 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:34 localhost podman[315484]: 2025-12-02 10:07:34.854533715 +0000 UTC m=+0.190802866 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:07:34 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:07:35 localhost podman[315596]: Dec 2 05:07:35 localhost podman[315596]: 2025-12-02 10:07:35.538078081 +0000 UTC m=+0.075763177 container create 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:35 localhost systemd[1]: Started libpod-conmon-8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79.scope. Dec 2 05:07:35 localhost systemd[1]: Started libcrun container. Dec 2 05:07:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064d138bd0413126e8f09e8b434779c28a6d201de1de908638c213d1831aaab5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:07:35 localhost podman[315596]: 2025-12-02 10:07:35.502089388 +0000 UTC m=+0.039774484 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:07:35 localhost podman[315596]: 2025-12-02 10:07:35.613850078 +0000 UTC m=+0.151535154 container init 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:07:35 
localhost podman[315596]: 2025-12-02 10:07:35.620080175 +0000 UTC m=+0.157765251 container start 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:07:35 localhost dnsmasq[315615]: started, version 2.85 cachesize 150
Dec 2 05:07:35 localhost dnsmasq[315615]: DNS service limited to local subnets
Dec 2 05:07:35 localhost dnsmasq[315615]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:07:35 localhost dnsmasq[315615]: warning: no upstream servers configured
Dec 2 05:07:35 localhost dnsmasq-dhcp[315615]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 2 05:07:35 localhost dnsmasq[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:07:35 localhost dnsmasq-dhcp[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host
Dec 2 05:07:35 localhost dnsmasq-dhcp[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts
Dec 2 05:07:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:35.678 263406 INFO neutron.agent.dhcp.agent [None req-c4a37e31-b281-45ef-9781-e6100274d33c - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:33Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c16bd636-0682-4847-ab96-1785d25e2f0c, ip_allocation=immediate, mac_address=fa:16:3e:30:e1:e6, name=tempest-NetworksTestDHCPv6-1739025647, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['5b8ccefb-7e5f-4954-b69b-9a64408e7e8c'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:32Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1691, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:34Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m
Dec 2 05:07:35 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:35.728 2 INFO neutron.agent.securitygroups_rpc [None req-87d4804a-2e84-429a-b45c-6794fadb1faa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:07:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:35.794 263406 INFO neutron.agent.dhcp.agent [None req-471b32fd-095e-4de9-bd9e-ab58bb9e7131 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m
Dec 2 05:07:35 localhost dnsmasq[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses
Dec 2 05:07:35 localhost dnsmasq-dhcp[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host
Dec 2 05:07:35 localhost dnsmasq-dhcp[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts
Dec 2 05:07:35 localhost podman[315634]: 2025-12-02 10:07:35.84379207 +0000 UTC m=+0.062246516 container kill 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 05:07:35 localhost nova_compute[281854]: 2025-12-02 10:07:35.943 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:36 localhost nova_compute[281854]: 2025-12-02 10:07:36.027 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:36 localhost podman[240799]: time="2025-12-02T10:07:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:07:36 localhost podman[240799]: @ - - [02/Dec/2025:10:07:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157924 "" "Go-http-client/1.1"
Dec 2 05:07:36 localhost podman[240799]: @ - - [02/Dec/2025:10:07:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19702 "" "Go-http-client/1.1"
Dec 2 05:07:36 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:36.113 263406 INFO neutron.agent.dhcp.agent [None req-091756a7-1138-4199-913a-fd3b6800d700 - - - - - -] DHCP configuration for ports {'c16bd636-0682-4847-ab96-1785d25e2f0c'} is completed#033[00m
Dec 2 05:07:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:07:36 localhost systemd[1]: tmp-crun.DNSUVa.mount: Deactivated successfully.
Dec 2 05:07:36 localhost podman[315669]: 2025-12-02 10:07:36.192803917 +0000 UTC m=+0.044701837 container kill 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 2 05:07:36 localhost dnsmasq[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:07:36 localhost dnsmasq-dhcp[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host
Dec 2 05:07:36 localhost dnsmasq-dhcp[315615]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts
Dec 2 05:07:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:36.385 2 INFO neutron.agent.securitygroups_rpc [None req-d47733e1-0ad6-43a1-b5b9-ff46ea82484c 71c1ab73f6584cdc8a5ac07abc1165b6 c83c01183aba40c080a7dde4126b2e3b - - default default] Security group member updated ['8d157c15-6c1c-467c-9dbb-a97c83d265b6']#033[00m
Dec 2 05:07:36 localhost dnsmasq[315615]: exiting on receipt of SIGTERM
Dec 2 05:07:36 localhost podman[315708]: 2025-12-02 10:07:36.631123423 +0000 UTC m=+0.063050157 container kill 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 2 05:07:36 localhost systemd[1]: libpod-8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79.scope: Deactivated successfully.
Dec 2 05:07:36 localhost podman[315721]: 2025-12-02 10:07:36.710109796 +0000 UTC m=+0.063656164 container died 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 05:07:36 localhost podman[315721]: 2025-12-02 10:07:36.740082178 +0000 UTC m=+0.093628536 container cleanup 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 2 05:07:36 localhost systemd[1]: libpod-conmon-8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79.scope: Deactivated successfully.
Dec 2 05:07:36 localhost podman[315722]: 2025-12-02 10:07:36.780242392 +0000 UTC m=+0.127944113 container remove 8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:07:36 localhost ovn_controller[154505]: 2025-12-02T10:07:36Z|00243|binding|INFO|Releasing lport 0f8026b8-d62d-493f-8190-d2e80ee812a4 from this chassis (sb_readonly=0)
Dec 2 05:07:36 localhost kernel: device tap0f8026b8-d6 left promiscuous mode
Dec 2 05:07:36 localhost ovn_controller[154505]: 2025-12-02T10:07:36Z|00244|binding|INFO|Setting lport 0f8026b8-d62d-493f-8190-d2e80ee812a4 down in Southbound
Dec 2 05:07:36 localhost nova_compute[281854]: 2025-12-02 10:07:36.833 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:36 localhost nova_compute[281854]: 2025-12-02 10:07:36.858 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:36.946 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0f8026b8-d62d-493f-8190-d2e80ee812a4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:07:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:36.948 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0f8026b8-d62d-493f-8190-d2e80ee812a4 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m
Dec 2 05:07:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:36.950 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:07:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:36.951 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[feb73755-0eb1-4679-8b57-b91193cb404b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:07:37 localhost systemd[1]: var-lib-containers-storage-overlay-064d138bd0413126e8f09e8b434779c28a6d201de1de908638c213d1831aaab5-merged.mount: Deactivated successfully.
Dec 2 05:07:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e2345bb7e79b321f7f69de1dea61d247897e9cd12c8e6dccd2a5b72c4a9ce79-userdata-shm.mount: Deactivated successfully.
Dec 2 05:07:37 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully.
Dec 2 05:07:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:38.172 263406 INFO neutron.agent.linux.ip_lib [None req-5c1620e9-aa10-42c0-8fb1-7023c95aa6d0 - - - - - -] Device tape072121f-22 cannot be used as it has no MAC address#033[00m
Dec 2 05:07:38 localhost nova_compute[281854]: 2025-12-02 10:07:38.243 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:38 localhost kernel: device tape072121f-22 entered promiscuous mode
Dec 2 05:07:38 localhost ovn_controller[154505]: 2025-12-02T10:07:38Z|00245|binding|INFO|Claiming lport e072121f-2250-4257-996f-2505d571a3a6 for this chassis.
Dec 2 05:07:38 localhost ovn_controller[154505]: 2025-12-02T10:07:38Z|00246|binding|INFO|e072121f-2250-4257-996f-2505d571a3a6: Claiming unknown
Dec 2 05:07:38 localhost NetworkManager[5965]: [1764670058.2508] manager: (tape072121f-22): new Generic device (/org/freedesktop/NetworkManager/Devices/40)
Dec 2 05:07:38 localhost nova_compute[281854]: 2025-12-02 10:07:38.249 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:38 localhost systemd-udevd[315762]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:07:38 localhost ovn_controller[154505]: 2025-12-02T10:07:38Z|00247|binding|INFO|Setting lport e072121f-2250-4257-996f-2505d571a3a6 ovn-installed in OVS
Dec 2 05:07:38 localhost nova_compute[281854]: 2025-12-02 10:07:38.263 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:38 localhost nova_compute[281854]: 2025-12-02 10:07:38.266 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:38 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device
Dec 2 05:07:38 localhost nova_compute[281854]: 2025-12-02 10:07:38.284 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:38 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device
Dec 2 05:07:38 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device
Dec 2 05:07:38 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device
Dec 2 05:07:38 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device
Dec 2 05:07:38 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device
Dec 2 05:07:38 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device
Dec 2 05:07:38 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device
Dec 2 05:07:38 localhost nova_compute[281854]: 2025-12-02 10:07:38.335 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:38.349 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e072121f-2250-4257-996f-2505d571a3a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:07:38 localhost ovn_controller[154505]: 2025-12-02T10:07:38Z|00248|binding|INFO|Setting lport e072121f-2250-4257-996f-2505d571a3a6 up in Southbound
Dec 2 05:07:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:38.351 160221 INFO neutron.agent.ovn.metadata.agent [-] Port e072121f-2250-4257-996f-2505d571a3a6 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m
Dec 2 05:07:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:38.353 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:07:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:38.354 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[46978b01-3263-4b7a-badb-9d3eba150ad7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:07:38 localhost nova_compute[281854]: 2025-12-02 10:07:38.371 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:38 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:38.978 2 INFO neutron.agent.securitygroups_rpc [None req-c850bb74-fc0e-4a51-908b-f066332bb7ea 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:07:39 localhost podman[315833]:
Dec 2 05:07:39 localhost podman[315833]: 2025-12-02 10:07:39.205415103 +0000 UTC m=+0.082304192 container create 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:07:39 localhost systemd[1]: Started libpod-conmon-0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093.scope.
Dec 2 05:07:39 localhost podman[315833]: 2025-12-02 10:07:39.169931434 +0000 UTC m=+0.046820503 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:07:39 localhost systemd[1]: Started libcrun container.
Dec 2 05:07:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf763cfd106b91e614a3bf8217f658b71d6696de983741c29fd7dbb50f4fb61d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:07:39 localhost podman[315833]: 2025-12-02 10:07:39.305046529 +0000 UTC m=+0.181935608 container init 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:07:39 localhost podman[315833]: 2025-12-02 10:07:39.316758352 +0000 UTC m=+0.193647431 container start 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 2 05:07:39 localhost dnsmasq[315851]: started, version 2.85 cachesize 150
Dec 2 05:07:39 localhost dnsmasq[315851]: DNS service limited to local subnets
Dec 2 05:07:39 localhost dnsmasq[315851]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:07:39 localhost dnsmasq[315851]: warning: no upstream servers configured
Dec 2 05:07:39 localhost dnsmasq[315851]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:07:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:39.383 263406 INFO neutron.agent.dhcp.agent [None req-5c1620e9-aa10-42c0-8fb1-7023c95aa6d0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:37Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4c39f2f9-96bf-4f1d-a072-d7adff3db5da, ip_allocation=immediate, mac_address=fa:16:3e:cb:58:35, name=tempest-NetworksTestDHCPv6-848696062, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=4, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['4215fce0-2435-4b41-9600-1b6971be6569'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:36Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1720, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:37Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m
Dec 2 05:07:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:39.534 263406 INFO neutron.agent.dhcp.agent [None req-466bf31e-a784-4286-911a-e8a5868942f3 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m
Dec 2 05:07:39 localhost dnsmasq[315851]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses
Dec 2 05:07:39 localhost podman[315868]: 2025-12-02 10:07:39.580806736 +0000 UTC m=+0.057367676 container kill 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 05:07:39 localhost ovn_controller[154505]: 2025-12-02T10:07:39Z|00249|binding|INFO|Releasing lport e072121f-2250-4257-996f-2505d571a3a6 from this chassis (sb_readonly=0)
Dec 2 05:07:39 localhost nova_compute[281854]: 2025-12-02 10:07:39.748 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:39 localhost kernel: device tape072121f-22 left promiscuous mode
Dec 2 05:07:39 localhost ovn_controller[154505]: 2025-12-02T10:07:39Z|00250|binding|INFO|Setting lport e072121f-2250-4257-996f-2505d571a3a6 down in Southbound
Dec 2 05:07:39 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:39.765 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e072121f-2250-4257-996f-2505d571a3a6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:07:39 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:39.767 160221 INFO neutron.agent.ovn.metadata.agent [-] Port e072121f-2250-4257-996f-2505d571a3a6 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m
Dec 2 05:07:39 localhost nova_compute[281854]: 2025-12-02 10:07:39.767 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:39 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:39.768 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:07:39 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:39.769 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8707e8-bd51-41c5-a279-bdebd2c6581f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:07:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:39.797 263406 INFO neutron.agent.dhcp.agent [None req-f791ce28-6d7d-4c3c-9166-50850a11f7b7 - - - - - -] DHCP configuration for ports {'4c39f2f9-96bf-4f1d-a072-d7adff3db5da'} is completed#033[00m
Dec 2 05:07:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:39.895 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:07:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:40.022 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:07:40 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:40.176 2 INFO neutron.agent.securitygroups_rpc [None req-7bcf89da-35a6-4e58-8ea7-d94146bd4928 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:07:40 localhost dnsmasq[315851]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:07:40 localhost podman[315908]: 2025-12-02 10:07:40.586126861 +0000 UTC m=+0.053360988 container kill 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 7d517d9d-ba68-4c0f-b344-6c3be9d614a4.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tape072121f-22 not found in namespace qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4.
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tape072121f-22 not found in namespace qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4.
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.615 263406 ERROR neutron.agent.dhcp.agent #033[00m
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.618 263406 INFO neutron.agent.dhcp.agent [None req-c86fd9c9-8c92-4391-b127-f78692fda812 - - - - - -] Synchronizing state#033[00m
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.900 263406 INFO neutron.agent.dhcp.agent [None req-9232755b-b057-437d-b308-8d060aa8cc33 - - - - - -] All active networks have been fetched through RPC.#033[00m
Dec 2 05:07:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:40.901 263406 INFO neutron.agent.dhcp.agent [-] Starting network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 dhcp configuration#033[00m
Dec 2 05:07:40 localhost nova_compute[281854]: 2025-12-02 10:07:40.946 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:41 localhost nova_compute[281854]: 2025-12-02 10:07:41.030 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:07:41 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:41.080 2 INFO neutron.agent.securitygroups_rpc [None req-5e11d272-dc8e-4f26-afbf-45da4f1c93dd c695c8d7887d4f5d99397fbd9a108bd7 27cf39916c5c4bc1833487052acaa22a - - default default] Security group member updated ['202778bd-7cc5-43e0-846c-ad0385193194']#033[00m
Dec 2 05:07:41 localhost dnsmasq[315851]: exiting on receipt of SIGTERM
Dec 2 05:07:41 localhost podman[315939]: 2025-12-02 10:07:41.08771452 +0000 UTC m=+0.064288761 container kill 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 2 05:07:41 localhost systemd[1]: libpod-0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093.scope: Deactivated successfully.
Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.091003) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37 Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061091064, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1966, "num_deletes": 267, "total_data_size": 2549366, "memory_usage": 2596800, "flush_reason": "Manual Compaction"} Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061106019, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1653647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22634, "largest_seqno": 24595, "table_properties": {"data_size": 1646223, "index_size": 4318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16375, "raw_average_key_size": 20, "raw_value_size": 1630923, "raw_average_value_size": 2054, "num_data_blocks": 189, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669940, "oldest_key_time": 1764669940, "file_creation_time": 1764670061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 15105 microseconds, and 7248 cpu microseconds. Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.106100) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1653647 bytes OK Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.106140) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.108100) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.108128) EVENT_LOG_v1 {"time_micros": 1764670061108120, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.108158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2540257, prev total WAL file size 
2541006, number of live WAL files 2. Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.109084) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303138' seq:72057594037927935, type:22 .. '6C6F676D0034323731' seq:0, type:0; will stop at (end) Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1614KB)], [36(15MB)] Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061109161, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 18292158, "oldest_snapshot_seqno": -1} Dec 2 05:07:41 localhost podman[315951]: 2025-12-02 10:07:41.170532845 +0000 UTC m=+0.068967085 container died 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:07:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12678 keys, 17928820 bytes, temperature: kUnknown Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061218827, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 17928820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17854048, "index_size": 41967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31749, "raw_key_size": 339637, "raw_average_key_size": 26, "raw_value_size": 17635555, "raw_average_value_size": 1391, "num_data_blocks": 1601, "num_entries": 12678, "num_filter_entries": 12678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.220846) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 17928820 bytes Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.222575) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.6 rd, 163.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 15.9 +0.0 blob) out(17.1 +0.0 blob), read-write-amplify(21.9) write-amplify(10.8) OK, records in: 13224, records dropped: 546 output_compression: NoCompression Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.222640) EVENT_LOG_v1 {"time_micros": 1764670061222594, "job": 20, "event": "compaction_finished", "compaction_time_micros": 109780, "compaction_time_cpu_micros": 45994, "output_level": 6, "num_output_files": 1, "total_output_size": 17928820, "num_input_records": 13224, "num_output_records": 12678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061223028, "job": 20, "event": "table_file_deletion", "file_number": 38} Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061225537, "job": 
20, "event": "table_file_deletion", "file_number": 36} Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.108973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.225690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.225699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.225702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.225706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:07:41.225710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:07:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:07:41 localhost podman[315951]: 2025-12-02 10:07:41.251411969 +0000 UTC m=+0.149846159 container cleanup 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:07:41 localhost systemd[1]: libpod-conmon-0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093.scope: Deactivated successfully. Dec 2 05:07:41 localhost podman[315958]: 2025-12-02 10:07:41.275562525 +0000 UTC m=+0.155686676 container remove 0b5b640ed8daf95d1ba27bc3c6769ede223cce7e4045475d3e61ebff0eb58093 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:07:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:41.340 263406 INFO neutron.agent.linux.ip_lib [-] Device tape072121f-22 cannot be used as it has no MAC address#033[00m Dec 2 05:07:41 localhost nova_compute[281854]: 2025-12-02 10:07:41.359 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:41 localhost kernel: device tape072121f-22 entered promiscuous mode Dec 2 05:07:41 localhost 
NetworkManager[5965]: [1764670061.3646] manager: (tape072121f-22): new Generic device (/org/freedesktop/NetworkManager/Devices/41) Dec 2 05:07:41 localhost nova_compute[281854]: 2025-12-02 10:07:41.365 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:41 localhost ovn_controller[154505]: 2025-12-02T10:07:41Z|00251|binding|INFO|Claiming lport e072121f-2250-4257-996f-2505d571a3a6 for this chassis. Dec 2 05:07:41 localhost ovn_controller[154505]: 2025-12-02T10:07:41Z|00252|binding|INFO|e072121f-2250-4257-996f-2505d571a3a6: Claiming unknown Dec 2 05:07:41 localhost systemd-udevd[315985]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:07:41 localhost ovn_controller[154505]: 2025-12-02T10:07:41Z|00253|binding|INFO|Setting lport e072121f-2250-4257-996f-2505d571a3a6 ovn-installed in OVS Dec 2 05:07:41 localhost ovn_controller[154505]: 2025-12-02T10:07:41Z|00254|binding|INFO|Setting lport e072121f-2250-4257-996f-2505d571a3a6 up in Southbound Dec 2 05:07:41 localhost nova_compute[281854]: 2025-12-02 10:07:41.377 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:41.378 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e072121f-2250-4257-996f-2505d571a3a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:41.379 160221 INFO neutron.agent.ovn.metadata.agent [-] Port e072121f-2250-4257-996f-2505d571a3a6 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:07:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:41.380 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:41.381 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[8de6cea5-f25b-4962-8b9e-51a506de5a3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:41 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device Dec 2 05:07:41 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device Dec 2 05:07:41 localhost nova_compute[281854]: 2025-12-02 10:07:41.391 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:41 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device Dec 2 05:07:41 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device Dec 2 05:07:41 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device Dec 2 05:07:41 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device Dec 2 05:07:41 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device Dec 2 05:07:41 localhost journal[230136]: ethtool ioctl error on tape072121f-22: No such device Dec 2 05:07:41 localhost nova_compute[281854]: 2025-12-02 10:07:41.427 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:41 localhost nova_compute[281854]: 2025-12-02 10:07:41.449 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:42 localhost podman[316054]: Dec 2 05:07:42 localhost podman[316054]: 2025-12-02 10:07:42.084659161 +0000 UTC m=+0.096432951 container create 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:07:42 localhost systemd[1]: Started libpod-conmon-44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db.scope. 
Dec 2 05:07:42 localhost podman[316054]: 2025-12-02 10:07:42.038232849 +0000 UTC m=+0.050006679 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:07:42 localhost systemd[1]: Started libcrun container. Dec 2 05:07:42 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bfca610369bb0363be82922c5eec6ba87562c0b61ac20065a0acd14246d573/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:07:42 localhost podman[316054]: 2025-12-02 10:07:42.176030515 +0000 UTC m=+0.187804305 container init 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:07:42 localhost podman[316054]: 2025-12-02 10:07:42.186413663 +0000 UTC m=+0.198187453 container start 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 05:07:42 localhost dnsmasq[316072]: started, version 2.85 cachesize 150 Dec 2 05:07:42 localhost dnsmasq[316072]: DNS service limited to local subnets Dec 2 05:07:42 localhost 
dnsmasq[316072]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:07:42 localhost dnsmasq[316072]: warning: no upstream servers configured Dec 2 05:07:42 localhost dnsmasq[316072]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:42 localhost systemd[1]: var-lib-containers-storage-overlay-bf763cfd106b91e614a3bf8217f658b71d6696de983741c29fd7dbb50f4fb61d-merged.mount: Deactivated successfully. Dec 2 05:07:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:42.251 263406 INFO neutron.agent.dhcp.agent [-] Finished network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 dhcp configuration#033[00m Dec 2 05:07:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:42.252 263406 INFO neutron.agent.dhcp.agent [None req-9232755b-b057-437d-b308-8d060aa8cc33 - - - - - -] Synchronizing state complete#033[00m Dec 2 05:07:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:42.468 263406 INFO neutron.agent.dhcp.agent [None req-eb266d90-0b4a-4720-a38e-2fa2f6803911 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', 'e072121f-2250-4257-996f-2505d571a3a6'} is completed#033[00m Dec 2 05:07:42 localhost dnsmasq[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/addn_hosts - 1 addresses Dec 2 05:07:42 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/host Dec 2 05:07:42 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/opts Dec 2 05:07:42 localhost podman[316104]: 2025-12-02 10:07:42.53467122 +0000 UTC m=+0.065936365 container kill 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:07:42 localhost podman[316117]: 2025-12-02 10:07:42.58777677 +0000 UTC m=+0.071381410 container kill 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:07:42 localhost dnsmasq[316072]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:42 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:42.807 2 INFO neutron.agent.securitygroups_rpc [None req-d64f03a9-f848-43c0-ae5d-2c025d3e76ac 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:43.047 263406 INFO neutron.agent.dhcp.agent [None req-832d7563-db5b-47a9-b1b4-9a0866470304 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', 'e072121f-2250-4257-996f-2505d571a3a6'} is completed#033[00m Dec 2 05:07:43 localhost dnsmasq[316072]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:07:43 localhost podman[316177]: 2025-12-02 10:07:43.083865852 +0000 UTC m=+0.071751060 container kill 
44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:43 localhost dnsmasq[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/addn_hosts - 0 addresses Dec 2 05:07:43 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/host Dec 2 05:07:43 localhost podman[316192]: 2025-12-02 10:07:43.128057735 +0000 UTC m=+0.077553095 container kill 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:07:43 localhost dnsmasq-dhcp[315330]: read /var/lib/neutron/dhcp/a69b1d7f-7b6a-4e57-97d2-3e13016a1afd/opts Dec 2 05:07:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:43.279 263406 INFO neutron.agent.dhcp.agent [None req-a8f71ee9-d37e-445c-b22b-5ca69a7f864a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:42Z, description=, device_id=, 
device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=94ca77a3-9f26-4682-9bc1-a9f1d339b3ab, ip_allocation=immediate, mac_address=fa:16:3e:55:87:45, name=tempest-NetworksTestDHCPv6-624163590, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=6, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['cdc6da6b-39a6-4f38-b5f6-c65fbcfe2d84'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:41Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1743, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:42Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:07:43 localhost ovn_controller[154505]: 2025-12-02T10:07:43Z|00255|binding|INFO|Releasing lport 308729f2-5cef-4da6-a8d1-12678a8ce24b from this chassis (sb_readonly=0) Dec 2 05:07:43 localhost ovn_controller[154505]: 2025-12-02T10:07:43Z|00256|binding|INFO|Setting lport 308729f2-5cef-4da6-a8d1-12678a8ce24b down in Southbound Dec 2 05:07:43 localhost kernel: device tap308729f2-5c left promiscuous mode Dec 2 05:07:43 localhost nova_compute[281854]: 2025-12-02 10:07:43.490 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:43.498 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27cf39916c5c4bc1833487052acaa22a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c696e62-8d3a-4321-b6e9-84ecff5ee056, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=308729f2-5cef-4da6-a8d1-12678a8ce24b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:43.499 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 308729f2-5cef-4da6-a8d1-12678a8ce24b in datapath a69b1d7f-7b6a-4e57-97d2-3e13016a1afd unbound from our chassis#033[00m Dec 2 05:07:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:43.501 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 
a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:07:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:43.502 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[2656bcfb-0423-455d-ba39-6070df2e894f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:43 localhost nova_compute[281854]: 2025-12-02 10:07:43.518 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:43 localhost podman[316242]: 2025-12-02 10:07:43.537456237 +0000 UTC m=+0.116501607 container kill 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:07:43 localhost systemd[1]: tmp-crun.1KFbZi.mount: Deactivated successfully. 
Dec 2 05:07:43 localhost dnsmasq[316072]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:07:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:43.539 263406 INFO neutron.agent.dhcp.agent [None req-513abf6a-7940-4289-9c91-69d49c5bb937 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', 'e072121f-2250-4257-996f-2505d571a3a6', '94ca77a3-9f26-4682-9bc1-a9f1d339b3ab'} is completed#033[00m Dec 2 05:07:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:43.930 263406 INFO neutron.agent.dhcp.agent [None req-d4832c45-2d9c-474f-ba84-3aa26b005a1f - - - - - -] DHCP configuration for ports {'94ca77a3-9f26-4682-9bc1-a9f1d339b3ab'} is completed#033[00m Dec 2 05:07:44 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:44.321 2 INFO neutron.agent.securitygroups_rpc [None req-4929d793-e537-4c53-a2f5-ddc0b60f2500 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:44 localhost podman[316282]: 2025-12-02 10:07:44.573265418 +0000 UTC m=+0.064178448 container kill 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:07:44 localhost dnsmasq[316072]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:45 localhost dnsmasq[316072]: exiting on receipt of SIGTERM Dec 2 05:07:45 localhost systemd[1]: 
libpod-44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db.scope: Deactivated successfully. Dec 2 05:07:45 localhost podman[316318]: 2025-12-02 10:07:45.300966747 +0000 UTC m=+0.060616073 container kill 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:07:45 localhost podman[316330]: 2025-12-02 10:07:45.36722615 +0000 UTC m=+0.055369373 container died 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:07:45 localhost podman[316330]: 2025-12-02 10:07:45.462188939 +0000 UTC m=+0.150332122 container cleanup 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:45 localhost systemd[1]: libpod-conmon-44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db.scope: Deactivated successfully. Dec 2 05:07:45 localhost podman[316337]: 2025-12-02 10:07:45.487740584 +0000 UTC m=+0.163263410 container remove 44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:07:45 localhost nova_compute[281854]: 2025-12-02 10:07:45.502 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:45 localhost ovn_controller[154505]: 2025-12-02T10:07:45Z|00257|binding|INFO|Releasing lport e072121f-2250-4257-996f-2505d571a3a6 from this chassis (sb_readonly=0) Dec 2 05:07:45 localhost kernel: device tape072121f-22 left promiscuous mode Dec 2 05:07:45 localhost ovn_controller[154505]: 2025-12-02T10:07:45Z|00258|binding|INFO|Setting lport e072121f-2250-4257-996f-2505d571a3a6 down in Southbound Dec 2 05:07:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:45.512 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], 
ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e072121f-2250-4257-996f-2505d571a3a6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:45.514 160221 INFO neutron.agent.ovn.metadata.agent [-] Port e072121f-2250-4257-996f-2505d571a3a6 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:07:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:45.516 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:45.516 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4759d7-fafa-4b82-a171-c3406f294a6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:45 localhost nova_compute[281854]: 2025-12-02 10:07:45.523 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:45 localhost systemd[1]: tmp-crun.xK0ej0.mount: Deactivated successfully. Dec 2 05:07:45 localhost systemd[1]: var-lib-containers-storage-overlay-b9bfca610369bb0363be82922c5eec6ba87562c0b61ac20065a0acd14246d573-merged.mount: Deactivated successfully. Dec 2 05:07:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44c1ba813977e8bbff0e0dcb62f4f0487d4aa476c73a4c6dfe1f9e4c716dd3db-userdata-shm.mount: Deactivated successfully. Dec 2 05:07:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:07:45 localhost podman[316358]: 2025-12-02 10:07:45.692543653 +0000 UTC m=+0.092297501 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm) Dec 2 05:07:45 localhost podman[316358]: 2025-12-02 10:07:45.708359465 +0000 UTC m=+0.108113373 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:45 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:07:45 localhost nova_compute[281854]: 2025-12-02 10:07:45.949 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:46 localhost nova_compute[281854]: 2025-12-02 10:07:46.032 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:46 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. 
Dec 2 05:07:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:46.242 263406 INFO neutron.agent.dhcp.agent [None req-35a03a92-44c4-4711-a86c-b37cca1320f8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:46 localhost dnsmasq[315330]: exiting on receipt of SIGTERM Dec 2 05:07:46 localhost podman[316397]: 2025-12-02 10:07:46.726308869 +0000 UTC m=+0.063311745 container kill 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:07:46 localhost systemd[1]: tmp-crun.MlY6G6.mount: Deactivated successfully. Dec 2 05:07:46 localhost systemd[1]: libpod-2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f.scope: Deactivated successfully. 
Dec 2 05:07:46 localhost podman[316411]: 2025-12-02 10:07:46.790936178 +0000 UTC m=+0.048918630 container died 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:07:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f-userdata-shm.mount: Deactivated successfully. Dec 2 05:07:46 localhost systemd[1]: var-lib-containers-storage-overlay-55241c631c38010f44af375554574eb4384ea2395685ed6f7e3376d51f08e8a7-merged.mount: Deactivated successfully. Dec 2 05:07:46 localhost systemd-journald[47611]: Data hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.0 (53725 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Dec 2 05:07:46 localhost systemd-journald[47611]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 2 05:07:46 localhost rsyslogd[754]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 05:07:46 localhost podman[316411]: 2025-12-02 10:07:46.894353175 +0000 UTC m=+0.152335547 container remove 2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a69b1d7f-7b6a-4e57-97d2-3e13016a1afd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:46 localhost systemd[1]: libpod-conmon-2622ee810d8e0767daa7057fe51a609ee033814717882cdb0e83d8745d102a7f.scope: Deactivated successfully. Dec 2 05:07:46 localhost rsyslogd[754]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 2 05:07:46 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:46.988 2 INFO neutron.agent.securitygroups_rpc [None req-771b159d-2ba2-4111-b13e-47ca58a8e2e2 71c1ab73f6584cdc8a5ac07abc1165b6 c83c01183aba40c080a7dde4126b2e3b - - default default] Security group member updated ['8d157c15-6c1c-467c-9dbb-a97c83d265b6']#033[00m Dec 2 05:07:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:47.162 263406 INFO neutron.agent.linux.ip_lib [None req-2aa4fae5-cf1c-4a3b-b0ca-d4485ba1cf08 - - - - - -] Device tapc0620a36-cb cannot be used as it has no MAC address#033[00m Dec 2 05:07:47 localhost nova_compute[281854]: 2025-12-02 10:07:47.188 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost kernel: device tapc0620a36-cb entered promiscuous mode Dec 2 05:07:47 localhost ovn_controller[154505]: 2025-12-02T10:07:47Z|00259|binding|INFO|Claiming lport 
c0620a36-cb6e-4025-9457-fbbe48b68e1f for this chassis. Dec 2 05:07:47 localhost nova_compute[281854]: 2025-12-02 10:07:47.197 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost ovn_controller[154505]: 2025-12-02T10:07:47Z|00260|binding|INFO|c0620a36-cb6e-4025-9457-fbbe48b68e1f: Claiming unknown Dec 2 05:07:47 localhost NetworkManager[5965]: [1764670067.2000] manager: (tapc0620a36-cb): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Dec 2 05:07:47 localhost systemd-udevd[316448]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:07:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:47.205 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c0620a36-cb6e-4025-9457-fbbe48b68e1f) 
old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:47.208 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c0620a36-cb6e-4025-9457-fbbe48b68e1f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:07:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:47.209 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:47.210 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[cd99d2e6-316d-414d-8a72-1b62f05fee9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:47 localhost ovn_controller[154505]: 2025-12-02T10:07:47Z|00261|binding|INFO|Setting lport c0620a36-cb6e-4025-9457-fbbe48b68e1f up in Southbound Dec 2 05:07:47 localhost ovn_controller[154505]: 2025-12-02T10:07:47Z|00262|binding|INFO|Setting lport c0620a36-cb6e-4025-9457-fbbe48b68e1f ovn-installed in OVS Dec 2 05:07:47 localhost nova_compute[281854]: 2025-12-02 10:07:47.214 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:47.221 263406 INFO neutron.agent.dhcp.agent [None req-b201acc6-3060-4c23-b832-8173b49a7048 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:47.222 263406 INFO neutron.agent.dhcp.agent [None req-b201acc6-3060-4c23-b832-8173b49a7048 - - - - - -] Network not present, action: clean_devices, 
action_kwargs: {}#033[00m Dec 2 05:07:47 localhost journal[230136]: ethtool ioctl error on tapc0620a36-cb: No such device Dec 2 05:07:47 localhost nova_compute[281854]: 2025-12-02 10:07:47.231 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost journal[230136]: ethtool ioctl error on tapc0620a36-cb: No such device Dec 2 05:07:47 localhost journal[230136]: ethtool ioctl error on tapc0620a36-cb: No such device Dec 2 05:07:47 localhost journal[230136]: ethtool ioctl error on tapc0620a36-cb: No such device Dec 2 05:07:47 localhost journal[230136]: ethtool ioctl error on tapc0620a36-cb: No such device Dec 2 05:07:47 localhost journal[230136]: ethtool ioctl error on tapc0620a36-cb: No such device Dec 2 05:07:47 localhost journal[230136]: ethtool ioctl error on tapc0620a36-cb: No such device Dec 2 05:07:47 localhost journal[230136]: ethtool ioctl error on tapc0620a36-cb: No such device Dec 2 05:07:47 localhost nova_compute[281854]: 2025-12-02 10:07:47.269 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost nova_compute[281854]: 2025-12-02 10:07:47.297 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:47 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:47.428 2 INFO neutron.agent.securitygroups_rpc [None req-fa6ee8ca-ed1b-4c8f-b78c-b44d9f9936bd 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:47 localhost systemd[1]: run-netns-qdhcp\x2da69b1d7f\x2d7b6a\x2d4e57\x2d97d2\x2d3e13016a1afd.mount: Deactivated successfully. 
Dec 2 05:07:47 localhost nova_compute[281854]: 2025-12-02 10:07:47.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:47.934 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:48 localhost podman[316519]: Dec 2 05:07:48 localhost podman[316519]: 2025-12-02 10:07:48.24397705 +0000 UTC m=+0.094684893 container create ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:48 localhost podman[316519]: 2025-12-02 10:07:48.198799572 +0000 UTC m=+0.049507485 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:07:48 localhost systemd[1]: Started libpod-conmon-ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa.scope. Dec 2 05:07:48 localhost systemd[1]: Started libcrun container. 
Dec 2 05:07:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec160c84537a63a0711c7b78391e0627f8810916382cc0d287179ccc031e3fe6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:07:48 localhost podman[316519]: 2025-12-02 10:07:48.333238819 +0000 UTC m=+0.183946682 container init ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:07:48 localhost podman[316519]: 2025-12-02 10:07:48.349567326 +0000 UTC m=+0.200275179 container start ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:48 localhost dnsmasq[316537]: started, version 2.85 cachesize 150 Dec 2 05:07:48 localhost dnsmasq[316537]: DNS service limited to local subnets Dec 2 05:07:48 localhost dnsmasq[316537]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:07:48 localhost dnsmasq[316537]: warning: no upstream servers configured Dec 
2 05:07:48 localhost dnsmasq-dhcp[316537]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:07:48 localhost dnsmasq[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:48 localhost dnsmasq-dhcp[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:07:48 localhost dnsmasq-dhcp[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:07:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:48.425 263406 INFO neutron.agent.dhcp.agent [None req-2aa4fae5-cf1c-4a3b-b0ca-d4485ba1cf08 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:46Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=98bfbb74-6d65-460c-be9e-916f678993ba, ip_allocation=immediate, mac_address=fa:16:3e:18:08:18, name=tempest-NetworksTestDHCPv6-1751893532, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=8, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['b7420c97-3129-4103-b655-a67cf1a8fa15'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:45Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1752, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:47Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:07:48 localhost ovn_controller[154505]: 2025-12-02T10:07:48Z|00263|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:07:48 localhost nova_compute[281854]: 2025-12-02 10:07:48.450 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:48.545 263406 INFO neutron.agent.dhcp.agent [None req-4632cfa4-197c-403e-8cf9-a9a8ad15ea84 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:07:48 localhost dnsmasq[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:07:48 localhost dnsmasq-dhcp[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:07:48 localhost podman[316556]: 2025-12-02 10:07:48.637072338 +0000 UTC m=+0.068036902 container kill ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:07:48 localhost dnsmasq-dhcp[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts 
Dec 2 05:07:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:48.894 263406 INFO neutron.agent.dhcp.agent [None req-fb7e9bec-af17-42f8-a709-f60cc4353cef - - - - - -] DHCP configuration for ports {'98bfbb74-6d65-460c-be9e-916f678993ba'} is completed#033[00m Dec 2 05:07:49 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:49.111 2 INFO neutron.agent.securitygroups_rpc [None req-1c456783-7537-497f-860f-91e236f22124 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:49 localhost dnsmasq[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:49 localhost dnsmasq-dhcp[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:07:49 localhost dnsmasq-dhcp[316537]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:07:49 localhost podman[316593]: 2025-12-02 10:07:49.359079203 +0000 UTC m=+0.064255629 container kill ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:07:50 localhost dnsmasq[316537]: exiting on receipt of SIGTERM Dec 2 05:07:50 localhost systemd[1]: tmp-crun.Hp5gM6.mount: Deactivated successfully. 
Dec 2 05:07:50 localhost podman[316632]: 2025-12-02 10:07:50.383080949 +0000 UTC m=+0.080629148 container kill ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 05:07:50 localhost systemd[1]: libpod-ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa.scope: Deactivated successfully. Dec 2 05:07:50 localhost podman[316648]: 2025-12-02 10:07:50.457193341 +0000 UTC m=+0.052711940 container died ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 2 05:07:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:07:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:07:50 localhost podman[316648]: 2025-12-02 10:07:50.527787081 +0000 UTC m=+0.123305630 container remove ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:07:50 localhost ovn_controller[154505]: 2025-12-02T10:07:50Z|00264|binding|INFO|Releasing lport c0620a36-cb6e-4025-9457-fbbe48b68e1f from this chassis (sb_readonly=0) Dec 2 05:07:50 localhost ovn_controller[154505]: 2025-12-02T10:07:50Z|00265|binding|INFO|Setting lport c0620a36-cb6e-4025-9457-fbbe48b68e1f down in Southbound Dec 2 05:07:50 localhost nova_compute[281854]: 2025-12-02 10:07:50.541 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:50 localhost kernel: device tapc0620a36-cb left promiscuous mode Dec 2 05:07:50 localhost nova_compute[281854]: 2025-12-02 10:07:50.548 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:50.554 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c0620a36-cb6e-4025-9457-fbbe48b68e1f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:50.555 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c0620a36-cb6e-4025-9457-fbbe48b68e1f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:07:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:50.556 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:50 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:50.557 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[28b79a89-4c3f-423e-9ccc-1f7bc88eb0d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:50 localhost nova_compute[281854]: 2025-12-02 10:07:50.565 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:50 localhost systemd[1]: libpod-conmon-ec0bb76357444177b973f7c93ec0e73a0338ba982d33a484b1a97bd516990cfa.scope: Deactivated successfully. Dec 2 05:07:50 localhost podman[316672]: 2025-12-02 10:07:50.57749442 +0000 UTC m=+0.082603841 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:07:50 localhost podman[316672]: 2025-12-02 10:07:50.583947223 +0000 UTC m=+0.089056604 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 2 05:07:50 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:07:50 localhost nova_compute[281854]: 2025-12-02 10:07:50.951 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:51 localhost nova_compute[281854]: 2025-12-02 10:07:51.034 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:51 localhost systemd[1]: var-lib-containers-storage-overlay-ec160c84537a63a0711c7b78391e0627f8810916382cc0d287179ccc031e3fe6-merged.mount: Deactivated successfully. Dec 2 05:07:51 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. 
Dec 2 05:07:51 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:51.565 2 INFO neutron.agent.securitygroups_rpc [None req-3a04bc57-bfc9-42ed-a239-801c8326e405 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:51 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:51.997 263406 INFO neutron.agent.linux.ip_lib [None req-e1b16a83-8994-4fa7-9c8e-59123f9fe5a9 - - - - - -] Device tap6c308b19-30 cannot be used as it has no MAC address#033[00m Dec 2 05:07:52 localhost nova_compute[281854]: 2025-12-02 10:07:52.021 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:52 localhost kernel: device tap6c308b19-30 entered promiscuous mode Dec 2 05:07:52 localhost nova_compute[281854]: 2025-12-02 10:07:52.029 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:52 localhost NetworkManager[5965]: [1764670072.0298] manager: (tap6c308b19-30): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Dec 2 05:07:52 localhost ovn_controller[154505]: 2025-12-02T10:07:52Z|00266|binding|INFO|Claiming lport 6c308b19-30ab-4052-98ab-e96747c0ae90 for this chassis. Dec 2 05:07:52 localhost ovn_controller[154505]: 2025-12-02T10:07:52Z|00267|binding|INFO|6c308b19-30ab-4052-98ab-e96747c0ae90: Claiming unknown Dec 2 05:07:52 localhost systemd-udevd[316701]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:07:52 localhost ovn_controller[154505]: 2025-12-02T10:07:52Z|00268|binding|INFO|Setting lport 6c308b19-30ab-4052-98ab-e96747c0ae90 ovn-installed in OVS Dec 2 05:07:52 localhost nova_compute[281854]: 2025-12-02 10:07:52.047 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:52 localhost ovn_controller[154505]: 2025-12-02T10:07:52Z|00269|binding|INFO|Setting lport 6c308b19-30ab-4052-98ab-e96747c0ae90 up in Southbound Dec 2 05:07:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:52.049 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6c308b19-30ab-4052-98ab-e96747c0ae90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:52 localhost 
ovn_metadata_agent[160216]: 2025-12-02 10:07:52.052 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 6c308b19-30ab-4052-98ab-e96747c0ae90 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:07:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:52.054 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:52.056 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6cdd26b8-1b5f-4514-821c-2e776881d07d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:52 localhost nova_compute[281854]: 2025-12-02 10:07:52.063 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:52 localhost nova_compute[281854]: 2025-12-02 10:07:52.071 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:07:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:07:52 localhost nova_compute[281854]: 2025-12-02 10:07:52.111 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:52 localhost nova_compute[281854]: 2025-12-02 10:07:52.145 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:52 localhost podman[316705]: 2025-12-02 10:07:52.181734837 +0000 UTC m=+0.087340727 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 2 05:07:52 localhost podman[316705]: 2025-12-02 10:07:52.200342785 +0000 UTC m=+0.105948715 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, 
architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter) Dec 2 05:07:52 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:07:52 localhost systemd[1]: tmp-crun.kTOOP1.mount: Deactivated successfully. Dec 2 05:07:52 localhost podman[316706]: 2025-12-02 10:07:52.302384175 +0000 UTC m=+0.206318441 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, 
config_id=edpm, container_name=node_exporter) Dec 2 05:07:52 localhost podman[316706]: 2025-12-02 10:07:52.336138458 +0000 UTC m=+0.240072664 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:07:52 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:07:52 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:52.379 2 INFO neutron.agent.securitygroups_rpc [None req-b88c63e0-efad-4ee2-bdbd-ab6bd93ed0e7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:52 localhost podman[316796]: Dec 2 05:07:52 localhost podman[316796]: 2025-12-02 10:07:52.926164813 +0000 UTC m=+0.071236767 container create a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:07:52 localhost systemd[1]: Started libpod-conmon-a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727.scope. Dec 2 05:07:52 localhost systemd[1]: Started libcrun container. 
Dec 2 05:07:52 localhost podman[316796]: 2025-12-02 10:07:52.896602972 +0000 UTC m=+0.041674886 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:07:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb7bccef66fc5b60a58054e9bad7944e15989fa315bde06cc6460ab768eba099/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:07:53 localhost podman[316796]: 2025-12-02 10:07:53.014459885 +0000 UTC m=+0.159531859 container init a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:53 localhost podman[316796]: 2025-12-02 10:07:53.023720223 +0000 UTC m=+0.168792177 container start a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:07:53 localhost dnsmasq[316814]: started, version 2.85 cachesize 150 Dec 2 05:07:53 localhost dnsmasq[316814]: DNS service limited to local subnets Dec 2 05:07:53 localhost dnsmasq[316814]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:07:53 localhost dnsmasq[316814]: warning: no upstream servers configured Dec 2 05:07:53 localhost dnsmasq[316814]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:53.208 263406 INFO neutron.agent.dhcp.agent [None req-aeacfdd5-7f5f-44aa-9dc8-e786852ac3fb - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:07:53 localhost dnsmasq[316814]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:53 localhost podman[316830]: 2025-12-02 10:07:53.39404508 +0000 UTC m=+0.065050171 container kill a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:07:53 localhost systemd[1]: tmp-crun.jC4VEj.mount: Deactivated successfully. 
Dec 2 05:07:53 localhost dnsmasq[316814]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:53 localhost podman[316867]: 2025-12-02 10:07:53.840793581 +0000 UTC m=+0.058656830 container kill a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:07:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:53.961 263406 INFO neutron.agent.dhcp.agent [None req-b1773246-6ce2-4630-92b3-1885b6ba0b08 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '6c308b19-30ab-4052-98ab-e96747c0ae90'} is completed#033[00m Dec 2 05:07:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:53.981 263406 INFO neutron.agent.dhcp.agent [None req-371e7bc4-1bb3-41d7-ba22-5dd81d317899 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:50Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=26fecc46-bb27-4178-a075-902cfd5a6c9d, ip_allocation=immediate, mac_address=fa:16:3e:14:a4:c7, name=tempest-NetworksTestDHCPv6-1634238789, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=10, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['6726367e-635c-4301-a591-316ca0795570'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:50Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1776, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:51Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:07:54 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:54.022 2 INFO neutron.agent.securitygroups_rpc [None req-1b0d4d6e-60bd-47bc-abae-8825e5c440ec 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:54 localhost dnsmasq[316814]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:07:54 localhost podman[316905]: 2025-12-02 10:07:54.156105397 +0000 UTC m=+0.059249326 container kill a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:07:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:54.258 263406 INFO neutron.agent.dhcp.agent [None req-6c17c48d-430e-4f80-8763-874abc010649 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '6c308b19-30ab-4052-98ab-e96747c0ae90'} is completed#033[00m Dec 2 05:07:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:54.304 263406 INFO neutron.agent.dhcp.agent [None req-371e7bc4-1bb3-41d7-ba22-5dd81d317899 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:53Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=862d39f9-3328-4142-b9e7-3246db70a1ad, ip_allocation=immediate, mac_address=fa:16:3e:92:92:e1, name=tempest-NetworksTestDHCPv6-274562090, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=12, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['08c27b5f-e79f-4e4b-9074-ee591cce28a9'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:53Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, 
resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1786, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:53Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:07:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:54.380 263406 INFO neutron.agent.dhcp.agent [None req-5362e9db-c9c9-4caf-b78b-92df89c876c1 - - - - - -] DHCP configuration for ports {'26fecc46-bb27-4178-a075-902cfd5a6c9d'} is completed#033[00m Dec 2 05:07:54 localhost dnsmasq[316814]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:07:54 localhost podman[316943]: 2025-12-02 10:07:54.482531799 +0000 UTC m=+0.060646252 container kill a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:07:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:54.724 263406 INFO neutron.agent.dhcp.agent [None req-60247a16-660f-4e76-be9f-05c973ed23e2 - - - - - -] DHCP configuration for ports {'862d39f9-3328-4142-b9e7-3246db70a1ad'} is completed#033[00m Dec 2 05:07:54 localhost dnsmasq[316814]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:07:54 localhost systemd[1]: tmp-crun.t5kWew.mount: Deactivated successfully. 
Dec 2 05:07:54 localhost podman[316982]: 2025-12-02 10:07:54.790368705 +0000 UTC m=+0.050496972 container kill a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:07:54 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:54.810 2 INFO neutron.agent.securitygroups_rpc [None req-f2f34722-a858-417d-bb9f-16583a7fb9bc 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:55 localhost dnsmasq[316814]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:55 localhost podman[317020]: 2025-12-02 10:07:55.103943954 +0000 UTC m=+0.069478970 container kill a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:07:55 localhost dnsmasq[316814]: exiting on receipt of SIGTERM Dec 2 05:07:55 localhost podman[317058]: 2025-12-02 10:07:55.662192399 +0000 UTC m=+0.058132196 container kill 
a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:07:55 localhost systemd[1]: libpod-a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727.scope: Deactivated successfully. Dec 2 05:07:55 localhost podman[317076]: 2025-12-02 10:07:55.732341547 +0000 UTC m=+0.049373673 container died a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 05:07:55 localhost systemd[1]: tmp-crun.jTu23E.mount: Deactivated successfully. Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:55 localhost podman[317076]: 2025-12-02 10:07:55.834437237 +0000 UTC m=+0.151469313 container remove a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:07:55 localhost systemd[1]: libpod-conmon-a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727.scope: Deactivated successfully. 
Dec 2 05:07:55 localhost ovn_controller[154505]: 2025-12-02T10:07:55Z|00270|binding|INFO|Releasing lport 6c308b19-30ab-4052-98ab-e96747c0ae90 from this chassis (sb_readonly=0) Dec 2 05:07:55 localhost ovn_controller[154505]: 2025-12-02T10:07:55Z|00271|binding|INFO|Setting lport 6c308b19-30ab-4052-98ab-e96747c0ae90 down in Southbound Dec 2 05:07:55 localhost kernel: device tap6c308b19-30 left promiscuous mode Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.849 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.850 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.850 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.850 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.851 281858 
DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:07:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:55.854 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6c308b19-30ab-4052-98ab-e96747c0ae90) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:55.856 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 6c308b19-30ab-4052-98ab-e96747c0ae90 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our 
chassis#033[00m Dec 2 05:07:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:55.857 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:55.858 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[34be7a7e-3f4f-4c23-b1ab-91a98f348ab1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.869 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:55 localhost nova_compute[281854]: 2025-12-02 10:07:55.954 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.036 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:56 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:56.070 263406 INFO neutron.agent.dhcp.agent [None req-fbf7cabc-b893-4cd2-bf1e-11bd5f6844a3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:07:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:07:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:07:56 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/4256956907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.272 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.370 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.370 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.605 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.606 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11277MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.607 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.607 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:07:56 localhost systemd[1]: var-lib-containers-storage-overlay-bb7bccef66fc5b60a58054e9bad7944e15989fa315bde06cc6460ab768eba099-merged.mount: Deactivated successfully. Dec 2 05:07:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a41e0e52d32c1b89548a11cf02d386351972898f22d0631e2368bd0bb155a727-userdata-shm.mount: Deactivated successfully. Dec 2 05:07:56 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.932 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.933 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:07:56 localhost nova_compute[281854]: 2025-12-02 10:07:56.933 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.227 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:07:57 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:57.525 263406 INFO neutron.agent.linux.ip_lib [None req-e080462e-3f44-4e89-ba5a-ed3b2061c837 - - - - - -] Device tap1374f02b-78 cannot be used as it has no MAC address#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.551 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:57 localhost kernel: device tap1374f02b-78 entered promiscuous mode Dec 2 05:07:57 localhost NetworkManager[5965]: [1764670077.5602] manager: (tap1374f02b-78): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.564 281858 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:57 localhost systemd-udevd[317151]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:07:57 localhost ovn_controller[154505]: 2025-12-02T10:07:57Z|00272|binding|INFO|Claiming lport 1374f02b-78ae-4718-ac96-c95d5911f385 for this chassis. Dec 2 05:07:57 localhost ovn_controller[154505]: 2025-12-02T10:07:57Z|00273|binding|INFO|1374f02b-78ae-4718-ac96-c95d5911f385: Claiming unknown Dec 2 05:07:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:57.584 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1374f02b-78ae-4718-ac96-c95d5911f385) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:07:57 
localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:57.586 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 1374f02b-78ae-4718-ac96-c95d5911f385 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:07:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:57.588 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:07:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:07:57.589 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca94d17-cec6-45b6-a671-c9e949974133]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:07:57 localhost ovn_controller[154505]: 2025-12-02T10:07:57Z|00274|binding|INFO|Setting lport 1374f02b-78ae-4718-ac96-c95d5911f385 ovn-installed in OVS Dec 2 05:07:57 localhost ovn_controller[154505]: 2025-12-02T10:07:57Z|00275|binding|INFO|Setting lport 1374f02b-78ae-4718-ac96-c95d5911f385 up in Southbound Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.592 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.606 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.641 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.669 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:07:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:07:57 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1390269581' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.774 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.780 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.803 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.806 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] 
Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:07:57 localhost nova_compute[281854]: 2025-12-02 10:07:57.807 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:07:57 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:57.905 2 INFO neutron.agent.securitygroups_rpc [None req-2105c7c1-c7a9-4dc4-9a73-811f6d407872 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:58 localhost podman[317208]: Dec 2 05:07:58 localhost podman[317208]: 2025-12-02 10:07:58.555426591 +0000 UTC m=+0.100993582 container create ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:07:58 localhost podman[317208]: 2025-12-02 10:07:58.507603532 +0000 UTC m=+0.053170463 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:07:58 localhost systemd[1]: Started libpod-conmon-ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9.scope. Dec 2 05:07:58 localhost systemd[1]: Started libcrun container. 
Dec 2 05:07:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/916d62d032cda511062320377806f59ee00027d3131c76817008c45ca8736e10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:07:58 localhost podman[317208]: 2025-12-02 10:07:58.639835619 +0000 UTC m=+0.185402540 container init ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:07:58 localhost podman[317208]: 2025-12-02 10:07:58.650433113 +0000 UTC m=+0.196000034 container start ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:07:58 localhost dnsmasq[317226]: started, version 2.85 cachesize 150 Dec 2 05:07:58 localhost dnsmasq[317226]: DNS service limited to local subnets Dec 2 05:07:58 localhost dnsmasq[317226]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:07:58 localhost dnsmasq[317226]: warning: no upstream servers configured Dec 
2 05:07:58 localhost dnsmasq-dhcp[317226]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:07:58 localhost dnsmasq[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:58 localhost dnsmasq-dhcp[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:07:58 localhost dnsmasq-dhcp[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:07:58 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:58.714 263406 INFO neutron.agent.dhcp.agent [None req-e080462e-3f44-4e89-ba5a-ed3b2061c837 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:57Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fd805fb2-3db5-4232-89a5-c0a4f0358d1c, ip_allocation=immediate, mac_address=fa:16:3e:c7:34:a5, name=tempest-NetworksTestDHCPv6-395330461, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=14, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['6b7f0265-c40a-4328-926b-3221114b8b73'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:56Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1829, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:07:57Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:07:58 localhost dnsmasq[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:07:58 localhost dnsmasq-dhcp[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:07:58 localhost dnsmasq-dhcp[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:07:58 localhost podman[317245]: 2025-12-02 10:07:58.907385698 +0000 UTC m=+0.059344739 container kill ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:07:58 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:58.917 263406 INFO neutron.agent.dhcp.agent [None req-7e132dcc-b1ce-4a63-b4f6-6a8c309fa147 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:07:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:07:59.277 263406 INFO neutron.agent.dhcp.agent [None req-0bec8322-6a2f-45f9-b6fa-7909f652b8fa - - - - - -] DHCP configuration for ports {'fd805fb2-3db5-4232-89a5-c0a4f0358d1c'} is completed#033[00m Dec 2 05:07:59 localhost neutron_sriov_agent[256494]: 2025-12-02 10:07:59.457 2 INFO neutron.agent.securitygroups_rpc 
[None req-abbcf6d8-096d-46e2-96f3-3a8543ab77e7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:07:59 localhost systemd[1]: tmp-crun.B96ZiP.mount: Deactivated successfully. Dec 2 05:07:59 localhost dnsmasq[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:07:59 localhost dnsmasq-dhcp[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:07:59 localhost podman[317281]: 2025-12-02 10:07:59.676040041 +0000 UTC m=+0.072856100 container kill ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:07:59 localhost dnsmasq-dhcp[317226]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:07:59 localhost nova_compute[281854]: 2025-12-02 10:07:59.808 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:07:59 localhost nova_compute[281854]: 2025-12-02 10:07:59.808 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:07:59 localhost nova_compute[281854]: 
2025-12-02 10:07:59.809 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:08:00 localhost nova_compute[281854]: 2025-12-02 10:08:00.313 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:08:00 localhost nova_compute[281854]: 2025-12-02 10:08:00.314 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:08:00 localhost nova_compute[281854]: 2025-12-02 10:08:00.314 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:08:00 localhost nova_compute[281854]: 2025-12-02 10:08:00.314 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:08:00 localhost dnsmasq[317226]: exiting on receipt of SIGTERM Dec 2 05:08:00 localhost podman[317318]: 2025-12-02 10:08:00.435744006 +0000 UTC m=+0.062428291 container kill ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:00 localhost systemd[1]: libpod-ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9.scope: Deactivated successfully. Dec 2 05:08:00 localhost podman[317330]: 2025-12-02 10:08:00.524686005 +0000 UTC m=+0.074810191 container died ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:08:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:08:00 localhost podman[317330]: 2025-12-02 10:08:00.561027848 +0000 UTC m=+0.111151984 container cleanup ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 05:08:00 localhost systemd[1]: var-lib-containers-storage-overlay-916d62d032cda511062320377806f59ee00027d3131c76817008c45ca8736e10-merged.mount: Deactivated successfully. Dec 2 05:08:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9-userdata-shm.mount: Deactivated successfully. Dec 2 05:08:00 localhost systemd[1]: libpod-conmon-ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9.scope: Deactivated successfully. 
Dec 2 05:08:00 localhost podman[317332]: 2025-12-02 10:08:00.598924422 +0000 UTC m=+0.142018261 container remove ae9444f9a54a06ee96e3610b624615322c84b4fa1a226fce64ecd09c57b5bea9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:08:00 localhost kernel: device tap1374f02b-78 left promiscuous mode Dec 2 05:08:00 localhost ovn_controller[154505]: 2025-12-02T10:08:00Z|00276|binding|INFO|Releasing lport 1374f02b-78ae-4718-ac96-c95d5911f385 from this chassis (sb_readonly=0) Dec 2 05:08:00 localhost ovn_controller[154505]: 2025-12-02T10:08:00Z|00277|binding|INFO|Setting lport 1374f02b-78ae-4718-ac96-c95d5911f385 down in Southbound Dec 2 05:08:00 localhost nova_compute[281854]: 2025-12-02 10:08:00.610 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:00.619 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1374f02b-78ae-4718-ac96-c95d5911f385) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:00.621 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 1374f02b-78ae-4718-ac96-c95d5911f385 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:08:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:00.622 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:08:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:00.623 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6003d9-6e45-4899-bcdc-d66a60e1fa56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:00 localhost nova_compute[281854]: 2025-12-02 10:08:00.632 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:00 localhost systemd[1]: tmp-crun.EzPhgK.mount: Deactivated successfully. 
Dec 2 05:08:00 localhost podman[317358]: 2025-12-02 10:08:00.646972957 +0000 UTC m=+0.088943631 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd) Dec 2 05:08:00 localhost podman[317358]: 2025-12-02 10:08:00.688975211 +0000 UTC m=+0.130945875 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd) Dec 2 05:08:00 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:08:00 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. 
Dec 2 05:08:00 localhost nova_compute[281854]: 2025-12-02 10:08:00.958 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.038 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.156 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:08:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.189 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.189 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.190 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:01.728 263406 INFO neutron.agent.linux.ip_lib [None req-36be447b-ea0e-4f8b-8fba-2471c99f5eb4 - - - - - -] Device tapb4564215-e5 cannot be used as it has no MAC address#033[00m Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.796 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost kernel: device tapb4564215-e5 entered promiscuous mode Dec 2 05:08:01 localhost ovn_controller[154505]: 2025-12-02T10:08:01Z|00278|binding|INFO|Claiming lport b4564215-e5ad-45ee-8436-ae119e1d9e06 for this chassis. 
Dec 2 05:08:01 localhost NetworkManager[5965]: [1764670081.8051] manager: (tapb4564215-e5): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Dec 2 05:08:01 localhost ovn_controller[154505]: 2025-12-02T10:08:01Z|00279|binding|INFO|b4564215-e5ad-45ee-8436-ae119e1d9e06: Claiming unknown Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.805 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost systemd-udevd[317391]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:08:01 localhost ovn_controller[154505]: 2025-12-02T10:08:01Z|00280|binding|INFO|Setting lport b4564215-e5ad-45ee-8436-ae119e1d9e06 up in Southbound Dec 2 05:08:01 localhost ovn_controller[154505]: 2025-12-02T10:08:01Z|00281|binding|INFO|Setting lport b4564215-e5ad-45ee-8436-ae119e1d9e06 ovn-installed in OVS Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.817 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:01.815 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 
'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b4564215-e5ad-45ee-8436-ae119e1d9e06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:01 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:01.818 160221 INFO neutron.agent.ovn.metadata.agent [-] Port b4564215-e5ad-45ee-8436-ae119e1d9e06 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:08:01 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:01.820 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:08:01 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:01.821 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ec247fca-43d6-4b02-bcbd-a5581ff0ec04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.837 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost journal[230136]: ethtool ioctl error on tapb4564215-e5: No such device Dec 2 05:08:01 localhost journal[230136]: ethtool ioctl error on tapb4564215-e5: No such device Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.844 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost journal[230136]: ethtool ioctl error on tapb4564215-e5: No such device Dec 2 05:08:01 localhost journal[230136]: ethtool ioctl error on tapb4564215-e5: No such device Dec 2 05:08:01 localhost journal[230136]: ethtool ioctl error on tapb4564215-e5: No such device Dec 2 05:08:01 localhost journal[230136]: ethtool ioctl error on tapb4564215-e5: No such device Dec 2 05:08:01 localhost journal[230136]: ethtool ioctl error on tapb4564215-e5: No such device Dec 2 05:08:01 localhost journal[230136]: ethtool ioctl error on tapb4564215-e5: No such device Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.882 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:01 localhost nova_compute[281854]: 2025-12-02 10:08:01.920 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e134 e134: 6 total, 6 up, 6 in Dec 2 05:08:02 localhost podman[317462]: Dec 2 05:08:02 localhost podman[317462]: 2025-12-02 10:08:02.822828418 +0000 UTC m=+0.085952111 container create 9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:02 localhost nova_compute[281854]: 2025-12-02 10:08:02.828 
281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:02 localhost nova_compute[281854]: 2025-12-02 10:08:02.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:02 localhost nova_compute[281854]: 2025-12-02 10:08:02.847 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:02 localhost nova_compute[281854]: 2025-12-02 10:08:02.848 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:02 localhost nova_compute[281854]: 2025-12-02 10:08:02.848 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 05:08:02 localhost systemd[1]: Started libpod-conmon-9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a.scope. Dec 2 05:08:02 localhost podman[317462]: 2025-12-02 10:08:02.782716634 +0000 UTC m=+0.045840337 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:02 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:02 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6549eae79605d48e71d9eff56b3544cee41554d4da99836d1204d64a2125610/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:02 localhost podman[317462]: 2025-12-02 10:08:02.902001226 +0000 UTC m=+0.165124929 container init 9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:02 localhost podman[317462]: 2025-12-02 10:08:02.914641434 +0000 UTC m=+0.177765137 container start 9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:02 localhost dnsmasq[317479]: started, version 2.85 cachesize 150 Dec 2 05:08:02 localhost dnsmasq[317479]: DNS service limited to local subnets Dec 2 05:08:02 localhost dnsmasq[317479]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:02 localhost dnsmasq[317479]: warning: no upstream servers configured Dec 
2 05:08:02 localhost dnsmasq-dhcp[317479]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:08:02 localhost dnsmasq[317479]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:02 localhost dnsmasq-dhcp[317479]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:02 localhost dnsmasq-dhcp[317479]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:03.051 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:08:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:03.052 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:08:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:03.053 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:08:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:03.076 263406 INFO neutron.agent.dhcp.agent [None req-7ca79e0d-3e91-40ab-8fe5-95ee712c3a95 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:08:03 localhost dnsmasq[317479]: exiting on receipt of SIGTERM Dec 2 05:08:03 localhost podman[317498]: 2025-12-02 10:08:03.26177628 +0000 UTC m=+0.063697504 container kill 9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:08:03 localhost systemd[1]: libpod-9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a.scope: Deactivated successfully. Dec 2 05:08:03 localhost podman[317511]: 2025-12-02 10:08:03.342349836 +0000 UTC m=+0.065988517 container died 9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 05:08:03 localhost podman[317511]: 2025-12-02 10:08:03.373289144 +0000 UTC m=+0.096927785 container cleanup 9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:08:03 localhost systemd[1]: 
libpod-conmon-9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a.scope: Deactivated successfully. Dec 2 05:08:03 localhost podman[317513]: 2025-12-02 10:08:03.417878167 +0000 UTC m=+0.131103969 container remove 9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:03 localhost ovn_controller[154505]: 2025-12-02T10:08:03Z|00282|binding|INFO|Releasing lport b4564215-e5ad-45ee-8436-ae119e1d9e06 from this chassis (sb_readonly=0) Dec 2 05:08:03 localhost ovn_controller[154505]: 2025-12-02T10:08:03Z|00283|binding|INFO|Setting lport b4564215-e5ad-45ee-8436-ae119e1d9e06 down in Southbound Dec 2 05:08:03 localhost nova_compute[281854]: 2025-12-02 10:08:03.430 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:03 localhost kernel: device tapb4564215-e5 left promiscuous mode Dec 2 05:08:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:03.438 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 
'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b4564215-e5ad-45ee-8436-ae119e1d9e06) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:03.440 160221 INFO neutron.agent.ovn.metadata.agent [-] Port b4564215-e5ad-45ee-8436-ae119e1d9e06 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:08:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:03.441 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:08:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:03.442 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[575d9f4f-32f0-41ff-b5e6-9d0d16982ae4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:03 localhost nova_compute[281854]: 2025-12-02 10:08:03.454 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:03 
localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e135 e135: 6 total, 6 up, 6 in Dec 2 05:08:03 localhost systemd[1]: var-lib-containers-storage-overlay-e6549eae79605d48e71d9eff56b3544cee41554d4da99836d1204d64a2125610-merged.mount: Deactivated successfully. Dec 2 05:08:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c1991448b05466eceb902376daedc49a59dd72caeac87c5632d5f49d7c6b05a-userdata-shm.mount: Deactivated successfully. Dec 2 05:08:03 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:08:04 localhost openstack_network_exporter[242845]: ERROR 10:08:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:08:04 localhost openstack_network_exporter[242845]: ERROR 10:08:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:08:04 localhost openstack_network_exporter[242845]: ERROR 10:08:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:08:04 localhost openstack_network_exporter[242845]: ERROR 10:08:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:08:04 localhost openstack_network_exporter[242845]: Dec 2 05:08:04 localhost openstack_network_exporter[242845]: ERROR 10:08:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:08:04 localhost openstack_network_exporter[242845]: Dec 2 05:08:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:04.633 263406 INFO neutron.agent.linux.ip_lib [None req-c75d3854-3022-40bb-a2c1-5e68a7c768e5 - - - - - -] Device tap679c782c-2b cannot be used as it has no MAC address#033[00m Dec 2 05:08:04 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e136 e136: 6 total, 6 up, 6 in Dec 2 05:08:04 localhost nova_compute[281854]: 2025-12-02 10:08:04.662 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:04 localhost kernel: device tap679c782c-2b entered promiscuous mode Dec 2 05:08:04 localhost ovn_controller[154505]: 2025-12-02T10:08:04Z|00284|binding|INFO|Claiming lport 679c782c-2b17-40db-9e68-0b3c95332c3f for this chassis. Dec 2 05:08:04 localhost ovn_controller[154505]: 2025-12-02T10:08:04Z|00285|binding|INFO|679c782c-2b17-40db-9e68-0b3c95332c3f: Claiming unknown Dec 2 05:08:04 localhost NetworkManager[5965]: [1764670084.6710] manager: (tap679c782c-2b): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Dec 2 05:08:04 localhost nova_compute[281854]: 2025-12-02 10:08:04.670 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:04 localhost ovn_controller[154505]: 2025-12-02T10:08:04Z|00286|binding|INFO|Setting lport 679c782c-2b17-40db-9e68-0b3c95332c3f ovn-installed in OVS Dec 2 05:08:04 localhost ovn_controller[154505]: 2025-12-02T10:08:04Z|00287|binding|INFO|Setting lport 679c782c-2b17-40db-9e68-0b3c95332c3f up in Southbound Dec 2 05:08:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:04.687 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=679c782c-2b17-40db-9e68-0b3c95332c3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:04 localhost nova_compute[281854]: 2025-12-02 10:08:04.687 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:04.690 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 679c782c-2b17-40db-9e68-0b3c95332c3f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:08:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:04.692 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:08:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:04.694 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[4d600042-eb86-4f7a-8cc2-27e5cc48631e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:04 localhost journal[230136]: ethtool ioctl error on tap679c782c-2b: No such device Dec 2 05:08:04 localhost journal[230136]: ethtool ioctl error on tap679c782c-2b: No such device Dec 2 05:08:04 localhost nova_compute[281854]: 
2025-12-02 10:08:04.708 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:04 localhost journal[230136]: ethtool ioctl error on tap679c782c-2b: No such device Dec 2 05:08:04 localhost journal[230136]: ethtool ioctl error on tap679c782c-2b: No such device Dec 2 05:08:04 localhost journal[230136]: ethtool ioctl error on tap679c782c-2b: No such device Dec 2 05:08:04 localhost journal[230136]: ethtool ioctl error on tap679c782c-2b: No such device Dec 2 05:08:04 localhost journal[230136]: ethtool ioctl error on tap679c782c-2b: No such device Dec 2 05:08:04 localhost journal[230136]: ethtool ioctl error on tap679c782c-2b: No such device Dec 2 05:08:04 localhost nova_compute[281854]: 2025-12-02 10:08:04.746 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:04 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:04.777 2 INFO neutron.agent.securitygroups_rpc [None req-22f3ee62-f7aa-4000-8792-d140ffb54960 ea09fd599b014976b4b6d101bd660615 64d30b95640d4bc4991756da49cb0163 - - default default] Security group member updated ['e4e82d11-7ddc-4424-b13a-044ca8b63239']#033[00m Dec 2 05:08:04 localhost nova_compute[281854]: 2025-12-02 10:08:04.781 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:04 localhost nova_compute[281854]: 2025-12-02 10:08:04.839 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:04 localhost nova_compute[281854]: 2025-12-02 10:08:04.839 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running 
periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:08:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:08:05 localhost systemd[1]: tmp-crun.fJcQ2O.mount: Deactivated successfully. Dec 2 05:08:05 localhost podman[317601]: 2025-12-02 10:08:05.449842207 +0000 UTC m=+0.085043565 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller) Dec 2 
05:08:05 localhost podman[317600]: 2025-12-02 10:08:05.426826092 +0000 UTC m=+0.069253604 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:08:05 localhost podman[317601]: 2025-12-02 10:08:05.504984403 +0000 UTC m=+0.140185701 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:05 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:08:05 localhost podman[317600]: 2025-12-02 10:08:05.560598591 +0000 UTC m=+0.203026023 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:08:05 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:08:05 localhost podman[317668]: Dec 2 05:08:05 localhost podman[317668]: 2025-12-02 10:08:05.729679794 +0000 UTC m=+0.091822928 container create 79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:05 localhost systemd[1]: Started libpod-conmon-79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95.scope. Dec 2 05:08:05 localhost podman[317668]: 2025-12-02 10:08:05.680942541 +0000 UTC m=+0.043085695 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:05 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082f904709c06aad040c93827c634ea3c26e9324146f3ba6d2bffe5cb021019d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:05 localhost podman[317668]: 2025-12-02 10:08:05.803060477 +0000 UTC m=+0.165203621 container init 79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:08:05 localhost podman[317668]: 2025-12-02 10:08:05.814180975 +0000 UTC m=+0.176324109 container start 79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:08:05 localhost dnsmasq[317686]: started, version 2.85 cachesize 150 Dec 2 05:08:05 localhost dnsmasq[317686]: DNS service limited to local subnets Dec 2 05:08:05 localhost dnsmasq[317686]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:05 localhost dnsmasq[317686]: warning: no upstream servers configured Dec 
2 05:08:05 localhost dnsmasq-dhcp[317686]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:08:05 localhost dnsmasq[317686]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:05 localhost dnsmasq-dhcp[317686]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:05 localhost dnsmasq-dhcp[317686]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:05 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:05.914 263406 INFO neutron.agent.dhcp.agent [None req-c75d3854-3022-40bb-a2c1-5e68a7c768e5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:04Z, description=, device_id=5d9ffd1e-177f-41cc-b69e-dee4698e0c88, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=40102fd5-d7e2-4d44-b447-185810364f71, ip_allocation=immediate, mac_address=fa:16:3e:08:88:92, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['660e51c9-d82a-4643-a274-dee902233c50'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:03Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=False, 
project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1883, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:04Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:08:05 localhost nova_compute[281854]: 2025-12-02 10:08:05.961 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:06 localhost podman[240799]: time="2025-12-02T10:08:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:08:06 localhost nova_compute[281854]: 2025-12-02 10:08:06.039 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.049 263406 INFO neutron.agent.dhcp.agent [None req-9ac91f9e-c570-45fd-8942-8e5b7e1e60ee - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:08:06 localhost podman[240799]: @ - - [02/Dec/2025:10:08:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156100 "" "Go-http-client/1.1" Dec 2 05:08:06 localhost podman[240799]: @ - - [02/Dec/2025:10:08:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19237 "" "Go-http-client/1.1" Dec 2 05:08:06 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:06.102 2 INFO neutron.agent.securitygroups_rpc [None req-c8dc5996-311b-454a-bef8-be44e05069d7 ea09fd599b014976b4b6d101bd660615 64d30b95640d4bc4991756da49cb0163 - - default default] Security group member updated ['e4e82d11-7ddc-4424-b13a-044ca8b63239']#033[00m Dec 2 05:08:06 localhost dnsmasq[317686]: read 
/var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:08:06 localhost dnsmasq-dhcp[317686]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:06 localhost dnsmasq-dhcp[317686]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:06 localhost podman[317704]: 2025-12-02 10:08:06.147596012 +0000 UTC m=+0.096631333 container kill 79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:06 localhost ovn_controller[154505]: 2025-12-02T10:08:06Z|00288|binding|INFO|Releasing lport 679c782c-2b17-40db-9e68-0b3c95332c3f from this chassis (sb_readonly=0) Dec 2 05:08:06 localhost nova_compute[281854]: 2025-12-02 10:08:06.320 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:06 localhost ovn_controller[154505]: 2025-12-02T10:08:06Z|00289|binding|INFO|Setting lport 679c782c-2b17-40db-9e68-0b3c95332c3f down in Southbound Dec 2 05:08:06 localhost kernel: device tap679c782c-2b left promiscuous mode Dec 2 05:08:06 localhost nova_compute[281854]: 2025-12-02 10:08:06.342 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
05:08:06 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:06.498 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=679c782c-2b17-40db-9e68-0b3c95332c3f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:06 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:06.501 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 679c782c-2b17-40db-9e68-0b3c95332c3f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:08:06 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:06.502 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:08:06 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:06.503 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6df68703-dacf-456e-b8d5-70dfa04e8b90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.522 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:04Z, description=, device_id=5d9ffd1e-177f-41cc-b69e-dee4698e0c88, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=40102fd5-d7e2-4d44-b447-185810364f71, ip_allocation=immediate, mac_address=fa:16:3e:08:88:92, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['660e51c9-d82a-4643-a274-dee902233c50'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:03Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=False, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1883, status=DOWN, tags=[], 
tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:04Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.563 263406 INFO neutron.agent.dhcp.agent [None req-25f35ce1-83ad-464f-a42a-5cad09666ec5 - - - - - -] DHCP configuration for ports {'40102fd5-d7e2-4d44-b447-185810364f71'} is completed#033[00m Dec 2 05:08:06 localhost dnsmasq[317686]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:08:06 localhost dnsmasq-dhcp[317686]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:06 localhost podman[317744]: 2025-12-02 10:08:06.860599752 +0000 UTC m=+0.199186108 container kill 79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:08:06 localhost dnsmasq-dhcp[317686]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e137 e137: 6 total, 6 up, 6 in Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 7d517d9d-ba68-4c0f-b344-6c3be9d614a4.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap679c782c-2b not found in namespace qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4. 
Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Dec 2 05:08:06 localhost 
neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent return fut.result() Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent return self.__get_result() Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent raise self._exception Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR 
neutron.agent.dhcp.agent raise exc_type(*result[2]) Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap679c782c-2b not found in namespace qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4. Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.893 263406 ERROR neutron.agent.dhcp.agent #033[00m Dec 2 05:08:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:06.896 263406 INFO neutron.agent.dhcp.agent [None req-9232755b-b057-437d-b308-8d060aa8cc33 - - - - - -] Synchronizing state#033[00m Dec 2 05:08:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:07.164 263406 INFO neutron.agent.dhcp.agent [None req-442f8f65-bc7e-44bc-888d-3c567e35c336 - - - - - -] DHCP configuration for ports {'40102fd5-d7e2-4d44-b447-185810364f71'} is completed#033[00m Dec 2 05:08:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:07.251 263406 INFO neutron.agent.dhcp.agent [None req-df0a516e-2ba9-46d1-bdd1-2505dc3dca33 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 05:08:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:07.252 263406 INFO neutron.agent.dhcp.agent [-] Starting network 207d2359-3afb-4aa2-9836-cfce83873d96 dhcp configuration#033[00m Dec 2 05:08:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:07.253 263406 INFO neutron.agent.dhcp.agent [-] Finished network 207d2359-3afb-4aa2-9836-cfce83873d96 dhcp configuration#033[00m Dec 2 05:08:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:07.254 263406 INFO neutron.agent.dhcp.agent [-] Starting network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 dhcp configuration#033[00m Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.439 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:07 localhost 
dnsmasq[317686]: exiting on receipt of SIGTERM Dec 2 05:08:07 localhost systemd[1]: libpod-79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95.scope: Deactivated successfully. Dec 2 05:08:07 localhost podman[317775]: 2025-12-02 10:08:07.480363998 +0000 UTC m=+0.103546871 container kill 79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:07 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:07.528 2 INFO neutron.agent.securitygroups_rpc [None req-99b1e585-32ae-4cc8-9a4d-b88a12900723 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:07 localhost podman[317791]: 2025-12-02 10:08:07.54911036 +0000 UTC m=+0.052463659 container died 79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:08:07 localhost systemd[1]: 
var-lib-containers-storage-overlay\x2dcontainers-79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95-userdata-shm.mount: Deactivated successfully. Dec 2 05:08:07 localhost systemd[1]: var-lib-containers-storage-overlay-082f904709c06aad040c93827c634ea3c26e9324146f3ba6d2bffe5cb021019d-merged.mount: Deactivated successfully. Dec 2 05:08:07 localhost podman[317791]: 2025-12-02 10:08:07.596079101 +0000 UTC m=+0.099432380 container remove 79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:07.633 263406 INFO neutron.agent.linux.ip_lib [-] Device tap679c782c-2b cannot be used as it has no MAC address#033[00m Dec 2 05:08:07 localhost systemd[1]: libpod-conmon-79185058bbd6d7df07adf4ff88d4db7de0da968b34b710c2de74f2dcbc607d95.scope: Deactivated successfully. Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.653 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:07 localhost kernel: device tap679c782c-2b entered promiscuous mode Dec 2 05:08:07 localhost ovn_controller[154505]: 2025-12-02T10:08:07Z|00290|binding|INFO|Claiming lport 679c782c-2b17-40db-9e68-0b3c95332c3f for this chassis. 
Dec 2 05:08:07 localhost NetworkManager[5965]: [1764670087.6602] manager: (tap679c782c-2b): new Generic device (/org/freedesktop/NetworkManager/Devices/47) Dec 2 05:08:07 localhost ovn_controller[154505]: 2025-12-02T10:08:07Z|00291|binding|INFO|679c782c-2b17-40db-9e68-0b3c95332c3f: Claiming unknown Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.660 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:07 localhost ovn_controller[154505]: 2025-12-02T10:08:07Z|00292|binding|INFO|Setting lport 679c782c-2b17-40db-9e68-0b3c95332c3f ovn-installed in OVS Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.666 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:07 localhost ovn_controller[154505]: 2025-12-02T10:08:07Z|00293|binding|INFO|Setting lport 679c782c-2b17-40db-9e68-0b3c95332c3f up in Southbound Dec 2 05:08:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:07.669 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=679c782c-2b17-40db-9e68-0b3c95332c3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:07.671 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 679c782c-2b17-40db-9e68-0b3c95332c3f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.670 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:07.672 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:08:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:07.673 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[edc0eae8-77a9-4408-93e9-f30a2cb7097d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.699 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.727 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.757 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 05:08:07 localhost nova_compute[281854]: 2025-12-02 10:08:07.842 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 05:08:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e138 e138: 6 total, 6 up, 6 in Dec 2 05:08:08 localhost podman[317874]: Dec 2 05:08:08 localhost podman[317874]: 2025-12-02 10:08:08.447973864 +0000 UTC m=+0.073162932 container create bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:08:08 localhost systemd[1]: Started 
libpod-conmon-bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d.scope. Dec 2 05:08:08 localhost systemd[1]: Started libcrun container. Dec 2 05:08:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72541e5e7751300a91567c94e6dac3a39f1f6779176b58ffc5f771b9c7663f69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:08 localhost podman[317874]: 2025-12-02 10:08:08.514553897 +0000 UTC m=+0.139742925 container init bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:08 localhost podman[317874]: 2025-12-02 10:08:08.420135322 +0000 UTC m=+0.045324350 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:08 localhost podman[317874]: 2025-12-02 10:08:08.52363742 +0000 UTC m=+0.148826448 container start bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:08 localhost systemd[1]: tmp-crun.TdNtZ8.mount: Deactivated successfully. 
Dec 2 05:08:08 localhost dnsmasq[317892]: started, version 2.85 cachesize 150 Dec 2 05:08:08 localhost dnsmasq[317892]: DNS service limited to local subnets Dec 2 05:08:08 localhost dnsmasq[317892]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:08 localhost dnsmasq[317892]: warning: no upstream servers configured Dec 2 05:08:08 localhost dnsmasq-dhcp[317892]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:08:08 localhost dnsmasq[317892]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:08:08 localhost dnsmasq-dhcp[317892]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:08 localhost dnsmasq-dhcp[317892]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:08.564 263406 INFO neutron.agent.dhcp.agent [-] Finished network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 dhcp configuration#033[00m Dec 2 05:08:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:08.565 263406 INFO neutron.agent.dhcp.agent [None req-df0a516e-2ba9-46d1-bdd1-2505dc3dca33 - - - - - -] Synchronizing state complete#033[00m Dec 2 05:08:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:08.633 263406 INFO neutron.agent.dhcp.agent [None req-6e7af5c8-b9eb-4254-a30a-6043ff159257 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '679c782c-2b17-40db-9e68-0b3c95332c3f', '40102fd5-d7e2-4d44-b447-185810364f71'} is completed#033[00m Dec 2 05:08:08 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:08.756 2 INFO neutron.agent.securitygroups_rpc [None req-c7a539d4-2f79-4a17-aaa4-1046dc1167cd b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 
05:08:08 localhost dnsmasq[317892]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:08 localhost dnsmasq-dhcp[317892]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:08 localhost dnsmasq-dhcp[317892]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:08 localhost podman[317910]: 2025-12-02 10:08:08.832119951 +0000 UTC m=+0.046015138 container kill bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e139 e139: 6 total, 6 up, 6 in Dec 2 05:08:09 localhost dnsmasq[317892]: exiting on receipt of SIGTERM Dec 2 05:08:09 localhost podman[317949]: 2025-12-02 10:08:09.83630152 +0000 UTC m=+0.062352812 container kill bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:09 localhost systemd[1]: libpod-bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d.scope: Deactivated successfully. 
Dec 2 05:08:09 localhost podman[317962]: 2025-12-02 10:08:09.908860394 +0000 UTC m=+0.058883790 container died bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:08:09 localhost podman[317962]: 2025-12-02 10:08:09.941066283 +0000 UTC m=+0.091089599 container cleanup bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:08:09 localhost systemd[1]: libpod-conmon-bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d.scope: Deactivated successfully. 
Dec 2 05:08:09 localhost podman[317964]: 2025-12-02 10:08:09.988480275 +0000 UTC m=+0.130729154 container remove bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:08:10 localhost kernel: device tap679c782c-2b left promiscuous mode Dec 2 05:08:10 localhost ovn_controller[154505]: 2025-12-02T10:08:10Z|00294|binding|INFO|Releasing lport 679c782c-2b17-40db-9e68-0b3c95332c3f from this chassis (sb_readonly=0) Dec 2 05:08:10 localhost ovn_controller[154505]: 2025-12-02T10:08:10Z|00295|binding|INFO|Setting lport 679c782c-2b17-40db-9e68-0b3c95332c3f down in Southbound Dec 2 05:08:10 localhost nova_compute[281854]: 2025-12-02 10:08:10.047 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:10 localhost nova_compute[281854]: 2025-12-02 10:08:10.070 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:10 localhost systemd[1]: var-lib-containers-storage-overlay-72541e5e7751300a91567c94e6dac3a39f1f6779176b58ffc5f771b9c7663f69-merged.mount: Deactivated successfully. Dec 2 05:08:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd56c53a3dfe32b2fedd6402a9f79047a076a93090fba003a3dfbaefbb78dd5d-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:08:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:10.587 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=679c782c-2b17-40db-9e68-0b3c95332c3f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:08:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:10.590 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 679c782c-2b17-40db-9e68-0b3c95332c3f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m
Dec 2 05:08:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:10.592 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:08:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:10.592 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[407eea56-3701-4e23-8403-c8418eb79aa4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:08:10 localhost ovn_controller[154505]: 2025-12-02T10:08:10Z|00296|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0)
Dec 2 05:08:10 localhost nova_compute[281854]: 2025-12-02 10:08:10.678 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:10 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully.
Dec 2 05:08:10 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:10.872 263406 INFO neutron.agent.dhcp.agent [None req-fa5cad18-1374-4582-bb12-ddaed479de5e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:08:10 localhost nova_compute[281854]: 2025-12-02 10:08:10.963 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:11 localhost nova_compute[281854]: 2025-12-02 10:08:11.042 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:08:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e140 e140: 6 total, 6 up, 6 in
Dec 2 05:08:11 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:11.681 263406 INFO neutron.agent.linux.ip_lib [None req-99fbca4b-9d33-4ffb-9300-cf2db7cf92e6 - - - - - -] Device tap0983994a-c5 cannot be used as it has no MAC address#033[00m
Dec 2 05:08:11 localhost nova_compute[281854]: 2025-12-02 10:08:11.697 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:11 localhost kernel: device tap0983994a-c5 entered promiscuous mode
Dec 2 05:08:11 localhost ovn_controller[154505]: 2025-12-02T10:08:11Z|00297|binding|INFO|Claiming lport 0983994a-c588-4cb2-8149-c4fb6ecf83ff for this chassis.
Dec 2 05:08:11 localhost ovn_controller[154505]: 2025-12-02T10:08:11Z|00298|binding|INFO|0983994a-c588-4cb2-8149-c4fb6ecf83ff: Claiming unknown
Dec 2 05:08:11 localhost NetworkManager[5965]: [1764670091.7060] manager: (tap0983994a-c5): new Generic device (/org/freedesktop/NetworkManager/Devices/48)
Dec 2 05:08:11 localhost systemd-udevd[318002]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:08:11 localhost nova_compute[281854]: 2025-12-02 10:08:11.706 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:11.716 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0983994a-c588-4cb2-8149-c4fb6ecf83ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:08:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:11.717 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0983994a-c588-4cb2-8149-c4fb6ecf83ff in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m
Dec 2 05:08:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:11.718 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:08:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:11.718 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[da0d8e51-92c0-4efb-84a3-da4e5319150a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:08:11 localhost journal[230136]: ethtool ioctl error on tap0983994a-c5: No such device
Dec 2 05:08:11 localhost journal[230136]: ethtool ioctl error on tap0983994a-c5: No such device
Dec 2 05:08:11 localhost ovn_controller[154505]: 2025-12-02T10:08:11Z|00299|binding|INFO|Setting lport 0983994a-c588-4cb2-8149-c4fb6ecf83ff ovn-installed in OVS
Dec 2 05:08:11 localhost ovn_controller[154505]: 2025-12-02T10:08:11Z|00300|binding|INFO|Setting lport 0983994a-c588-4cb2-8149-c4fb6ecf83ff up in Southbound
Dec 2 05:08:11 localhost nova_compute[281854]: 2025-12-02 10:08:11.735 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:11 localhost journal[230136]: ethtool ioctl error on tap0983994a-c5: No such device
Dec 2 05:08:11 localhost journal[230136]: ethtool ioctl error on tap0983994a-c5: No such device
Dec 2 05:08:11 localhost journal[230136]: ethtool ioctl error on tap0983994a-c5: No such device
Dec 2 05:08:11 localhost journal[230136]: ethtool ioctl error on tap0983994a-c5: No such device
Dec 2 05:08:11 localhost journal[230136]: ethtool ioctl error on tap0983994a-c5: No such device
Dec 2 05:08:11 localhost journal[230136]: ethtool ioctl error on tap0983994a-c5: No such device
Dec 2 05:08:11 localhost nova_compute[281854]: 2025-12-02 10:08:11.763 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:11 localhost nova_compute[281854]: 2025-12-02 10:08:11.777 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:12 localhost podman[318073]: 
Dec 2 05:08:12 localhost podman[318073]: 2025-12-02 10:08:12.540810153 +0000 UTC m=+0.101140767 container create d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 2 05:08:12 localhost systemd[1]: Started libpod-conmon-d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93.scope.
Dec 2 05:08:12 localhost podman[318073]: 2025-12-02 10:08:12.491701394 +0000 UTC m=+0.052032028 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:08:12 localhost systemd[1]: Started libcrun container.
Dec 2 05:08:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a61df44e86f8fb6336ebbdb7745d6d54fbfba671109717a6826455fb31cff4a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:08:12 localhost podman[318073]: 2025-12-02 10:08:12.618738709 +0000 UTC m=+0.179069323 container init d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:08:12 localhost podman[318073]: 2025-12-02 10:08:12.628319734 +0000 UTC m=+0.188650348 container start d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:08:12 localhost dnsmasq[318091]: started, version 2.85 cachesize 150
Dec 2 05:08:12 localhost dnsmasq[318091]: DNS service limited to local subnets
Dec 2 05:08:12 localhost dnsmasq[318091]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:08:12 localhost dnsmasq[318091]: warning: no upstream servers configured
Dec 2 05:08:12 localhost dnsmasq[318091]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:08:12 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:12.691 263406 INFO neutron.agent.dhcp.agent [None req-99fbca4b-9d33-4ffb-9300-cf2db7cf92e6 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:11Z, description=, device_id=76289ed5-9a62-4040-9fb0-0eb25e7c4d83, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=064db39f-7363-4009-9530-71abc8becb56, ip_allocation=immediate, mac_address=fa:16:3e:00:89:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['4c7ab332-2efb-4efc-8a25-4f4854ae0d48'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:10Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=False, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1914, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:12Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m
Dec 2 05:08:12 localhost nova_compute[281854]: 2025-12-02 10:08:12.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:08:12 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:12.848 263406 INFO neutron.agent.dhcp.agent [None req-2ea3aeb4-156e-46ce-8d73-5dd2543297a7 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m
Dec 2 05:08:12 localhost dnsmasq[318091]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses
Dec 2 05:08:12 localhost podman[318111]: 2025-12-02 10:08:12.895102334 +0000 UTC m=+0.066110443 container kill d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:08:13 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:13.155 263406 INFO neutron.agent.dhcp.agent [None req-41bfd9cc-def8-4ac4-9c8c-85b850dbf444 - - - - - -] DHCP configuration for ports {'064db39f-7363-4009-9530-71abc8becb56'} is completed#033[00m
Dec 2 05:08:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e141 e141: 6 total, 6 up, 6 in
Dec 2 05:08:13 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:13.785 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:11Z, description=, device_id=76289ed5-9a62-4040-9fb0-0eb25e7c4d83, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=064db39f-7363-4009-9530-71abc8becb56, ip_allocation=immediate, mac_address=fa:16:3e:00:89:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['4c7ab332-2efb-4efc-8a25-4f4854ae0d48'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:10Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=False, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1914, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:12Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m
Dec 2 05:08:13 localhost dnsmasq[318091]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses
Dec 2 05:08:13 localhost podman[318148]: 2025-12-02 10:08:13.985115802 +0000 UTC m=+0.058079429 container kill d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:08:14 localhost nova_compute[281854]: 2025-12-02 10:08:14.119 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:14.231 263406 INFO neutron.agent.dhcp.agent [None req-f8508c3a-5080-4c49-972e-c26eaa40eb87 - - - - - -] DHCP configuration for ports {'064db39f-7363-4009-9530-71abc8becb56'} is completed#033[00m
Dec 2 05:08:14 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e142 e142: 6 total, 6 up, 6 in
Dec 2 05:08:15 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:15.326 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:08:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e143 e143: 6 total, 6 up, 6 in
Dec 2 05:08:15 localhost dnsmasq[318091]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:08:15 localhost podman[318187]: 2025-12-02 10:08:15.663281103 +0000 UTC m=+0.054933424 container kill d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 05:08:15 localhost ovn_controller[154505]: 2025-12-02T10:08:15Z|00301|binding|INFO|Releasing lport 0983994a-c588-4cb2-8149-c4fb6ecf83ff from this chassis (sb_readonly=0)
Dec 2 05:08:15 localhost kernel: device tap0983994a-c5 left promiscuous mode
Dec 2 05:08:15 localhost ovn_controller[154505]: 2025-12-02T10:08:15Z|00302|binding|INFO|Setting lport 0983994a-c588-4cb2-8149-c4fb6ecf83ff down in Southbound
Dec 2 05:08:15 localhost nova_compute[281854]: 2025-12-02 10:08:15.773 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:15.782 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0983994a-c588-4cb2-8149-c4fb6ecf83ff) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:08:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:15.783 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0983994a-c588-4cb2-8149-c4fb6ecf83ff in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m
Dec 2 05:08:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:15.784 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:08:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:15.785 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[aab57ad9-17e2-4f5e-8a26-dea8614063df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:08:15 localhost nova_compute[281854]: 2025-12-02 10:08:15.794 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:15 localhost nova_compute[281854]: 2025-12-02 10:08:15.966 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:16 localhost nova_compute[281854]: 2025-12-02 10:08:16.043 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.107 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.108 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.113 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3c114c25-3520-46f7-b0f6-c4effd1a917e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.109122', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1c055fe-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': '872ee421c2948b6612cc7b9b5d76f1f6c5e4c692cbe85f4a6bc627ed6ac1fb52'}]}, 'timestamp': '2025-12-02 10:08:16.114531', '_unique_id': '007290f3619740349886423117b439d3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.116 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:08:16.117 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.117 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4f0e3a9c-ade7-4e48-b6c8-256a2963388d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.117726', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1c0e848-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': 'b7cb9879149761e7c6f6dbd83c09e9effc27106f17484e888836a083855a080b'}]}, 'timestamp': '2025-12-02 10:08:16.118212', '_unique_id': '76820e30731f4c1eb07cadf6229b832e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:08:16.119 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.119 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.120 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.120 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fa2c9151-665b-4587-9ffe-e16525a255c0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.120368', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1c14ed2-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': '0ed12734316b4e54f4d9c391770ec703bf3dce185c1f57cbaeaba2a87ccc2edb'}]}, 'timestamp': '2025-12-02 10:08:16.120899', '_unique_id': 'b4540c0401e34bcca7ec5292a3a2b1c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.121 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:08:16.122 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.154 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.155 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2df317e6-82e6-40b4-9bd4-e2a163969967', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.123119', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd1c686ea-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': '67c9b77c9ab465829069f7d9b7e9f95bf5406e0565727c63b7801fdc7df626fa'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.123119', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1c69f54-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': '3dd082df1936f022ed46d4b84eb6a55c0553560695478d2fc6919990d2d5a4b9'}]}, 'timestamp': '2025-12-02 10:08:16.155685', '_unique_id': '16e73cc7923b4e5a961bc02a741f66aa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.157 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:08:16.158 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.158 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '20e0cea6-2297-4670-b4c4-d9e04c9f447b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.158565', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1c72726-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': '67f28ccbd27fd18f6957026e1e095b0c2bd3dc8f1cc45e3025d4336c1e98c6ea'}]}, 'timestamp': '2025-12-02 10:08:16.159146', '_unique_id': '290bdc9cf71740fc9ee188ca0b53853a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.160 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.161 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.161 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.161 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fe210fcf-3096-4251-a44e-88c72c875125', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.161878', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1c7a48a-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': '6a44e9f50cdf9b198dcd1c2b07ce991af1f15db1d3dcd7ed64cdce0a94dfdcac'}]}, 'timestamp': '2025-12-02 10:08:16.162352', '_unique_id': 'bd5b37175ec949ca854f4a5dd64623b3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.163 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.164 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.165 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.181 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 17710000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e144 e144: 6 total, 6 up, 6 in
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e21cca1a-cb0b-441a-841a-cc5ddd3d867a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 17710000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:08:16.165137', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'd1caa946-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.400558434, 'message_signature': 'e778b8ee6248a992f03dee92266666be4e6f60b8988420fd134617bca04c167a'}]}, 'timestamp': '2025-12-02 10:08:16.182129', '_unique_id': 'b39b438da43f43bab93783ecdff508b0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.183 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.185 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.185 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.186 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7df3eb41-37ab-49d6-a4bb-644de48f5ca1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.185225', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd1cb3d02-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': 'c995f65fdcc63b6d54aefe4c3b7d130a16e6a2d80f4634fa061ebcb981297231'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.185225', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1cb5594-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': '191929b07223547304475914de949a25c30b3a63bf6fde7e708c2e4745fb9c93'}]}, 'timestamp': '2025-12-02 10:08:16.186840', '_unique_id': '48922079dd1e4a1493aadb422722800b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.189 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.189 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.191 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4b040727-d664-4ee3-ac64-a9a5a7aacd8d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.189916', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd1cbec20-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': '8b56d80ad1b95a38f4c05536807e4a2798c954ed40aadb722d1fc441028e6ffe'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.189916', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1cc4f94-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': 'c0242899bf2a5c33fcacc60331aa9efccd71b0b0aac0da0fe1ef967462c94d51'}]}, 'timestamp': '2025-12-02 10:08:16.192970', '_unique_id': '3df8292ff10340baac6630a4859bba70'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.194 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.195 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.196 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'bbb0bf6e-33c4-4749-81a4-316de4f0b851', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.196082', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1ccdec8-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': '392a5837b917a4873fbea494ccdd60d9de6a68f183f1e8890ad228df46fb73b5'}]}, 'timestamp': '2025-12-02 10:08:16.196638', '_unique_id': 'eae9957e237c47bbb05c27881d892c0c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.197 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:08:16.199 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.199 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f6a3ca0f-8619-49f2-a22b-4515ad9917f5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.199424', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1cd6b2c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': 'bcedd6c17bc08c1b8b97c02b7793329fe810a85311414ea605712807b2975357'}]}, 'timestamp': '2025-12-02 10:08:16.200249', '_unique_id': '40ae01ef305b40508500ee0a49030d87'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:08:16.201 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.201 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.202 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.203 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '24944876-9702-45f6-9108-01f791839e13', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.203270', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1cdf57e-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': 'ec57849337a1c3996482f5766e2516a34f8c31363970d4e5276057e4cf40511a'}]}, 'timestamp': '2025-12-02 10:08:16.203811', '_unique_id': '1d2cc18b75784ff8821e46187b504b2d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.204 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.224 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8e422db3-0416-41a7-8f00-168387fdcf57', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.206396', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd1d12398-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.425471808, 'message_signature': '36d2e17ac303e4a51fd19e2177c6fbd0a5f57172ce5b6f159db0ea54f20f2213'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.206396', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1d13f2c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.425471808, 'message_signature': '158c7ef70c430283f9f7a65d59fccc0438f8d67068c092237a0684050eeb48dc'}]}, 'timestamp': '2025-12-02 10:08:16.225302', '_unique_id': '60ad47422ba34cab9c435fabd71b59a0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.226 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.228 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.229 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.229 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7e78e534-fd57-4dcf-8833-de1025211a66', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.228975', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd1d1e60c-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.425471808, 'message_signature': '7e8d1b22c58a78b51ee0c3b23baf8cb48633c9d89588262c071bbc3a76c0c720'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.228975', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1d1fcbe-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.425471808, 'message_signature': '7e8acb9b0a5b92ce670a87f3cf8d256ea4fd8c5ca5556cf4851afe0634e5bed3'}]}, 'timestamp': '2025-12-02 10:08:16.230220', '_unique_id': '6f7a932c30f14f06babde2ab7702c3ff'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.231 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.233 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.233 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.233 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '464f39e1-bcd7-4dc4-a05c-f430c3f3f5ed', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.233372', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd1d29278-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': '7b8907a5473ecd1f4d1f3ebe0acf31fa9cc67883b7a9bf97e9f0d8d46441cde2'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.233372', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1d2a754-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': 'e0620e33ff35dd0a768a3fa4525e917a772018f4f091105ff44bdf3db79b8ff8'}]}, 'timestamp': '2025-12-02 10:08:16.234503', '_unique_id': '5b71ff2d78ee4624bdf0399f1c0b5666'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.235 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.237 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.237 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.237 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.238 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3ce9e7ea-1dc0-489e-a4e1-1b9711576d6d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.237913', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd1d34268-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': 'd8136e16be7c7795d4ca12a1f51b69032027d0cd8a43537bea53bdb98d6cf033'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.237913', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1d358a2-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': 'd211890d58976032f9d99252c6fadb2531ea05899109b7561e2c8862665b5db6'}]}, 'timestamp': '2025-12-02 10:08:16.239044', '_unique_id': '774b2e2c3dc64369a6d01858efad755c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.240 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.242 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.242 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b00f5ad1-0c82-4a0d-8e45-3b340b8b79b2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:08:16.242865', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'd1d403f6-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.400558434, 'message_signature': '5cde8290fa6d9cf61111f9ed7aeb62380933ecf0cf8d5a31242cf588863ded6f'}]}, 'timestamp': '2025-12-02 10:08:16.243445', '_unique_id': 'f9673f0c2ad140a7a74e918ce31d75e2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.244 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.247 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:08:16.248 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.248 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '883d0d1c-df24-47da-bdaf-7a596b483244', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.248157', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': 'd1d4d236-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': '5f02efe489e75a2cecc725f179b6e7c1f27fa41d0b6b9f3b492c322c9a8b0bb7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.248157', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1d4ea50-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.342204059, 'message_signature': 'cf10c244920a8b7baa4e3e8fb30bc5373745879f4f8dbe1e340066cc51e37c4e'}]}, 'timestamp': '2025-12-02 10:08:16.249326', '_unique_id': '380325129a2e413dae5c94c3981a4106'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.250 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.252 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters 
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.252 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a116e725-e23c-4d4d-a670-025a3e833059', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.252278', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1d57358-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 
'message_signature': 'd20a39640889707bc37c6a0e3acd8207efd36d37958431748aba247713b68d15'}]}, 'timestamp': '2025-12-02 10:08:16.252889', '_unique_id': 'f4df1069a99b40bfaf21349aa93263e1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging raise 
ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.253 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.255 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.255 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ce72a5fc-f57e-4c89-a288-40d4a432b938', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:08:16.255916', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'd1d60692-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.328207586, 'message_signature': 'a55ce317a367eec948f024cfdfa552fabd7d09ce1a986b26e51fef5fe7cfaa0a'}]}, 'timestamp': '2025-12-02 10:08:16.256824', '_unique_id': 'd4b82e05dfbb4aecb7126c48221f3fd1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:08:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.258 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:08:16.260 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.261 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.262 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6962c73d-d25b-49f0-8c9a-9e4c4c8e31b0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:08:16.261265', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd1d6d8ec-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.425471808, 'message_signature': 'b4d9c814480a174896a0a931b89cf4f0b6ab7e4635fe8b6aa9d4ce57645c24a5'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:08:16.261265', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd1d6f642-cf66-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12258.425471808, 'message_signature': '0d089381456f866af7a6bfdeab4c436e296816e869d7ea645389c8428327ebb4'}]}, 'timestamp': '2025-12-02 10:08:16.262876', '_unique_id': '9ce931a33531495d82023e4e9e9a5c92'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging Dec 2 05:08:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:08:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:08:16.264 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:08:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 05:08:16 localhost dnsmasq[318091]: exiting on receipt of SIGTERM
Dec 2 05:08:16 localhost podman[318226]: 2025-12-02 10:08:16.374680462 +0000 UTC m=+0.052673085 container kill d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 2 05:08:16 localhost systemd[1]: libpod-d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93.scope: Deactivated successfully.
Dec 2 05:08:16 localhost systemd[1]: tmp-crun.mw3JIn.mount: Deactivated successfully.
Dec 2 05:08:16 localhost podman[318238]: 2025-12-02 10:08:16.457416937 +0000 UTC m=+0.094153960 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 2 05:08:16 localhost podman[318238]: 2025-12-02 10:08:16.471993186 +0000 UTC m=+0.108730139 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 05:08:16 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 05:08:16 localhost podman[318245]: 2025-12-02 10:08:16.538724724 +0000 UTC m=+0.153672716 container died d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:08:16 localhost ovn_controller[154505]: 2025-12-02T10:08:16Z|00303|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0)
Dec 2 05:08:16 localhost podman[318245]: 2025-12-02 10:08:16.573989423 +0000 UTC m=+0.188937365 container cleanup d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:08:16 localhost systemd[1]: libpod-conmon-d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93.scope: Deactivated successfully.
Dec 2 05:08:16 localhost nova_compute[281854]: 2025-12-02 10:08:16.603 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:16 localhost podman[318249]: 2025-12-02 10:08:16.611931425 +0000 UTC m=+0.220711223 container remove d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:08:16 localhost systemd[1]: var-lib-containers-storage-overlay-7a61df44e86f8fb6336ebbdb7745d6d54fbfba671109717a6826455fb31cff4a-merged.mount: Deactivated successfully.
Dec 2 05:08:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d112284dad311a13fa2f730fc2c063b3f922917194c190cb70e5e683f47b5b93-userdata-shm.mount: Deactivated successfully.
Dec 2 05:08:16 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:16.878 263406 INFO neutron.agent.dhcp.agent [None req-cfe58590-dfd2-4f69-b66e-536d1e688d90 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:08:16 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully.
Dec 2 05:08:17 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e145 e145: 6 total, 6 up, 6 in
Dec 2 05:08:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:18.687 263406 INFO neutron.agent.linux.ip_lib [None req-78f3ff14-c88f-46c0-bea2-bcb4c2dabfb5 - - - - - -] Device tap49697408-c0 cannot be used as it has no MAC address#033[00m
Dec 2 05:08:18 localhost nova_compute[281854]: 2025-12-02 10:08:18.735 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:18 localhost kernel: device tap49697408-c0 entered promiscuous mode
Dec 2 05:08:18 localhost NetworkManager[5965]: [1764670098.7439] manager: (tap49697408-c0): new Generic device (/org/freedesktop/NetworkManager/Devices/49)
Dec 2 05:08:18 localhost ovn_controller[154505]: 2025-12-02T10:08:18Z|00304|binding|INFO|Claiming lport 49697408-c01c-4e89-b56b-aa2bd5d6b93d for this chassis.
Dec 2 05:08:18 localhost ovn_controller[154505]: 2025-12-02T10:08:18Z|00305|binding|INFO|49697408-c01c-4e89-b56b-aa2bd5d6b93d: Claiming unknown
Dec 2 05:08:18 localhost nova_compute[281854]: 2025-12-02 10:08:18.747 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:18 localhost systemd-udevd[318297]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.753 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee9:26fd/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=49697408-c01c-4e89-b56b-aa2bd5d6b93d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.755 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 49697408-c01c-4e89-b56b-aa2bd5d6b93d in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.757 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port c4648b8c-9385-4d50-be21-eac02960451b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.757 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.759 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a5705038-2821-4c11-8cfc-b3ec0055d50f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:08:18 localhost journal[230136]: ethtool ioctl error on tap49697408-c0: No such device
Dec 2 05:08:18 localhost ovn_controller[154505]: 2025-12-02T10:08:18Z|00306|binding|INFO|Setting lport 49697408-c01c-4e89-b56b-aa2bd5d6b93d ovn-installed in OVS
Dec 2 05:08:18 localhost ovn_controller[154505]: 2025-12-02T10:08:18Z|00307|binding|INFO|Setting lport 49697408-c01c-4e89-b56b-aa2bd5d6b93d up in Southbound
Dec 2 05:08:18 localhost nova_compute[281854]: 2025-12-02 10:08:18.789 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:18 localhost journal[230136]: ethtool ioctl error on tap49697408-c0: No such device
Dec 2 05:08:18 localhost nova_compute[281854]: 2025-12-02 10:08:18.794 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:18 localhost journal[230136]: ethtool ioctl error on tap49697408-c0: No such device
Dec 2 05:08:18 localhost journal[230136]: ethtool ioctl error on tap49697408-c0: No such device
Dec 2 05:08:18 localhost journal[230136]: ethtool ioctl error on tap49697408-c0: No such device
Dec 2 05:08:18 localhost journal[230136]: ethtool ioctl error on tap49697408-c0: No such device
Dec 2 05:08:18 localhost journal[230136]: ethtool ioctl error on tap49697408-c0: No such device
Dec 2 05:08:18 localhost journal[230136]: ethtool ioctl error on tap49697408-c0: No such device
Dec 2 05:08:18 localhost nova_compute[281854]: 2025-12-02 10:08:18.832 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:18 localhost nova_compute[281854]: 2025-12-02 10:08:18.871 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.903 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.905 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.906 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port c4648b8c-9385-4d50-be21-eac02960451b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.906 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:08:18 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:18.907 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[69020618-7b52-4ebf-8edf-bf6396a7b88b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:08:19 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:19.296 2 INFO neutron.agent.securitygroups_rpc [None req-2bfb9ebd-1846-44bd-b2e4-1f309ec769c2 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:08:19 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:19.335 2 INFO neutron.agent.securitygroups_rpc [None req-a38d4309-d6ec-4127-b224-040aeb412100 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m
Dec 2 05:08:19 localhost podman[318369]:
Dec 2 05:08:19 localhost podman[318369]: 2025-12-02 10:08:19.732824574 +0000 UTC m=+0.093363539 container create 9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 2 05:08:19 localhost systemd[1]: Started libpod-conmon-9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d.scope.
Dec 2 05:08:19 localhost podman[318369]: 2025-12-02 10:08:19.688179884 +0000 UTC m=+0.048718879 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:08:19 localhost systemd[1]: tmp-crun.lNYFfz.mount: Deactivated successfully.
Dec 2 05:08:19 localhost systemd[1]: Started libcrun container.
Dec 2 05:08:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e491d8d4a71f860433ad283ec12af5d3298ac14eb5025a594468033bcfee7a83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:08:19 localhost podman[318369]: 2025-12-02 10:08:19.810383981 +0000 UTC m=+0.170922936 container init 9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:08:19 localhost podman[318369]: 2025-12-02 10:08:19.823662654 +0000 UTC m=+0.184201659 container start 9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 2 05:08:19 localhost dnsmasq[318387]: started, version 2.85 cachesize 150
Dec 2 05:08:19 localhost dnsmasq[318387]: DNS service limited to local subnets
Dec 2 05:08:19 localhost dnsmasq[318387]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:08:19 localhost dnsmasq[318387]: warning: no upstream servers configured
Dec 2 05:08:19 localhost dnsmasq[318387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:08:20 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:20.029 263406 INFO neutron.agent.dhcp.agent [None req-e4596e98-5517-4660-9df8-2ac1df4897cf - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m
Dec 2 05:08:20 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:20.054 2 INFO neutron.agent.securitygroups_rpc [None req-d321651e-4716-4e9e-b955-449cf71fa8bf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']#033[00m
Dec 2 05:08:20 localhost dnsmasq[318387]: exiting on receipt of SIGTERM
Dec 2 05:08:20 localhost podman[318406]: 2025-12-02 10:08:20.127995225 +0000 UTC m=+0.059653651 container kill 9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 05:08:20 localhost systemd[1]: libpod-9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d.scope: Deactivated successfully.
Dec 2 05:08:20 localhost podman[318419]: 2025-12-02 10:08:20.211657415 +0000 UTC m=+0.071519168 container died 9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 2 05:08:20 localhost podman[318419]: 2025-12-02 10:08:20.336407519 +0000 UTC m=+0.196269232 container cleanup 9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:08:20 localhost systemd[1]: libpod-conmon-9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d.scope: Deactivated successfully.
Dec 2 05:08:20 localhost podman[318426]: 2025-12-02 10:08:20.363811199 +0000 UTC m=+0.206811933 container remove 9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 2 05:08:20 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:20.426 2 INFO neutron.agent.securitygroups_rpc [None req-d321651e-4716-4e9e-b955-449cf71fa8bf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']#033[00m
Dec 2 05:08:20 localhost nova_compute[281854]: 2025-12-02 10:08:20.543 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:20 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:20.646 2 INFO neutron.agent.securitygroups_rpc [None req-87212674-2d83-471b-8535-396909b240c7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:08:20 localhost systemd[1]: var-lib-containers-storage-overlay-e491d8d4a71f860433ad283ec12af5d3298ac14eb5025a594468033bcfee7a83-merged.mount: Deactivated successfully.
Dec 2 05:08:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e721da8ab0c62dd724218239feaad4b5755f53508e7fc0680508b8f2c2e4a6d-userdata-shm.mount: Deactivated successfully.
Dec 2 05:08:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 05:08:20 localhost podman[318448]: 2025-12-02 10:08:20.849991986 +0000 UTC m=+0.081304198 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:08:20 localhost podman[318448]: 2025-12-02 10:08:20.861137783 +0000 UTC m=+0.092450035 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 2 05:08:20 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully.
Dec 2 05:08:20 localhost nova_compute[281854]: 2025-12-02 10:08:20.970 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:21 localhost nova_compute[281854]: 2025-12-02 10:08:21.045 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:08:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e146 e146: 6 total, 6 up, 6 in
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.240178) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101240254, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 910, "num_deletes": 258, "total_data_size": 1858186, "memory_usage": 1876584, "flush_reason": "Manual Compaction"}
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101250183, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1224963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24600, "largest_seqno": 25505, "table_properties": {"data_size": 1220813, "index_size": 1877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10165, "raw_average_key_size": 21, "raw_value_size": 1212225, "raw_average_value_size": 2509, "num_data_blocks": 82, "num_entries": 483, "num_filter_entries": 483, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670061, "oldest_key_time": 1764670061, "file_creation_time": 1764670101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10057 microseconds, and 4133 cpu microseconds.
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.250238) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1224963 bytes OK Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.250265) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.251993) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.252014) EVENT_LOG_v1 {"time_micros": 1764670101252007, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.252037) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1853370, prev total WAL file size 1853370, number of live WAL files 2. Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.252707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. 
'7061786F73003132333030' seq:0, type:0; will stop at (end) Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1196KB)], [39(17MB)] Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101252758, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 19153783, "oldest_snapshot_seqno": -1} Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 12629 keys, 17217424 bytes, temperature: kUnknown Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101336648, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 17217424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17143550, "index_size": 41197, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31621, "raw_key_size": 339304, "raw_average_key_size": 26, "raw_value_size": 16926449, "raw_average_value_size": 1340, "num_data_blocks": 1564, "num_entries": 12629, "num_filter_entries": 12629, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.336930) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 17217424 bytes Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.339264) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.1 rd, 205.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 17.1 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(29.7) write-amplify(14.1) OK, records in: 13161, records dropped: 532 output_compression: NoCompression Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.339293) EVENT_LOG_v1 {"time_micros": 1764670101339280, "job": 22, "event": "compaction_finished", "compaction_time_micros": 83979, "compaction_time_cpu_micros": 39139, "output_level": 6, "num_output_files": 1, "total_output_size": 17217424, "num_input_records": 13161, "num_output_records": 12629, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541913/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101339584, "job": 22, "event": "table_file_deletion", "file_number": 41} Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101342315, "job": 22, "event": "table_file_deletion", "file_number": 39} Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.252596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.342418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.342428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.342432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.342435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:21.342438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:21 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:21.764 2 INFO neutron.agent.securitygroups_rpc [None req-7754a6d4-074d-4d03-86b1-db3804b94ab5 11daa5bc8801433f99b71663879a8016 
62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']#033[00m Dec 2 05:08:21 localhost podman[318518]: Dec 2 05:08:21 localhost podman[318518]: 2025-12-02 10:08:21.849566842 +0000 UTC m=+0.097284292 container create 876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:08:21 localhost systemd[1]: Started libpod-conmon-876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697.scope. Dec 2 05:08:21 localhost podman[318518]: 2025-12-02 10:08:21.797216168 +0000 UTC m=+0.044933658 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:21 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7994a80ca3ae2362c636511214aa588ff04075f3d1ffa63ed03ba32d12f47496/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:21 localhost podman[318518]: 2025-12-02 10:08:21.92075241 +0000 UTC m=+0.168469860 container init 876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:08:21 localhost podman[318518]: 2025-12-02 10:08:21.927448668 +0000 UTC m=+0.175166138 container start 876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:08:21 localhost dnsmasq[318536]: started, version 2.85 cachesize 150 Dec 2 05:08:21 localhost dnsmasq[318536]: DNS service limited to local subnets Dec 2 05:08:21 localhost dnsmasq[318536]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:21 localhost dnsmasq[318536]: warning: no upstream servers configured Dec 
2 05:08:21 localhost dnsmasq-dhcp[318536]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:08:21 localhost dnsmasq[318536]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:08:21 localhost dnsmasq-dhcp[318536]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:21 localhost dnsmasq-dhcp[318536]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:22.149 263406 INFO neutron.agent.dhcp.agent [None req-ff1c3e3b-9900-498b-9ff3-5616c8b82e26 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '4309bb90-9cb8-4e1a-9a0b-f5834db52038', '49697408-c01c-4e89-b56b-aa2bd5d6b93d'} is completed#033[00m Dec 2 05:08:22 localhost dnsmasq[318536]: exiting on receipt of SIGTERM Dec 2 05:08:22 localhost podman[318554]: 2025-12-02 10:08:22.250475547 +0000 UTC m=+0.066795131 container kill 876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:08:22 localhost systemd[1]: libpod-876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697.scope: Deactivated successfully. 
Dec 2 05:08:22 localhost podman[318569]: 2025-12-02 10:08:22.304893377 +0000 UTC m=+0.041628901 container died 876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:22 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:22.308 2 INFO neutron.agent.securitygroups_rpc [None req-a959ed63-fb01-427b-9973-6a88ead4c1cf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']#033[00m Dec 2 05:08:22 localhost podman[318569]: 2025-12-02 10:08:22.333072028 +0000 UTC m=+0.069807512 container cleanup 876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:22 localhost systemd[1]: libpod-conmon-876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697.scope: Deactivated successfully. 
Dec 2 05:08:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:22.352 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:22 localhost podman[318571]: 2025-12-02 10:08:22.395398018 +0000 UTC m=+0.124040726 container remove 876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 05:08:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:08:22 localhost podman[318572]: 2025-12-02 10:08:22.377681347 +0000 UTC m=+0.109283654 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, version=9.6, managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-minimal-container) Dec 2 05:08:22 localhost podman[318572]: 2025-12-02 10:08:22.46301074 +0000 UTC m=+0.194613057 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container) Dec 2 05:08:22 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:08:22 localhost podman[318615]: 2025-12-02 10:08:22.516399024 +0000 UTC m=+0.110153377 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:08:22 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e147 e147: 6 total, 6 up, 6 in Dec 2 05:08:22 localhost podman[318615]: 2025-12-02 10:08:22.55193999 +0000 UTC m=+0.145694323 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus 
Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:08:22 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:08:22 localhost systemd[1]: tmp-crun.KcUnsg.mount: Deactivated successfully. Dec 2 05:08:22 localhost systemd[1]: var-lib-containers-storage-overlay-7994a80ca3ae2362c636511214aa588ff04075f3d1ffa63ed03ba32d12f47496-merged.mount: Deactivated successfully. Dec 2 05:08:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-876af8e678cf4d2a9e6f22ac20fc13cbbf440d5ca9116ca1735556ffd1696697-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:08:23 localhost podman[318692]: Dec 2 05:08:23 localhost podman[318692]: 2025-12-02 10:08:23.14330923 +0000 UTC m=+0.087797191 container create c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:08:23 localhost systemd[1]: Started libpod-conmon-c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9.scope. Dec 2 05:08:23 localhost podman[318692]: 2025-12-02 10:08:23.097194461 +0000 UTC m=+0.041682472 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:23 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfb9edd785f2c80f522c9fd9120eb8211cfcba13f635ce18efdb23634020145c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:23 localhost podman[318692]: 2025-12-02 10:08:23.212040162 +0000 UTC m=+0.156528123 container init c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:08:23 localhost podman[318692]: 2025-12-02 10:08:23.222897541 +0000 UTC m=+0.167385492 container start c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:08:23 localhost dnsmasq[318746]: started, version 2.85 cachesize 150 Dec 2 05:08:23 localhost dnsmasq[318746]: DNS service limited to local subnets Dec 2 05:08:23 localhost dnsmasq[318746]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:23 localhost dnsmasq[318746]: warning: no upstream servers configured Dec 
2 05:08:23 localhost dnsmasq[318746]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:23 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e148 e148: 6 total, 6 up, 6 in Dec 2 05:08:23 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:23.591 263406 INFO neutron.agent.dhcp.agent [None req-0f16e2d2-ee30-44d4-b251-3be52e53bca7 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '49697408-c01c-4e89-b56b-aa2bd5d6b93d'} is completed#033[00m Dec 2 05:08:23 localhost podman[318764]: 2025-12-02 10:08:23.623185858 +0000 UTC m=+0.083410424 container kill c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:08:23 localhost dnsmasq[318746]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:23 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:23.766 263406 INFO neutron.agent.dhcp.agent [None req-16ccd695-fef9-4c33-b634-fbb1bbce8395 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:19Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=4309bb90-9cb8-4e1a-9a0b-f5834db52038, ip_allocation=immediate, mac_address=fa:16:3e:a2:96:1d, name=tempest-NetworksTestDHCPv6-491044706, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=23, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['268258f7-a345-48a6-ab25-cc16b1ab921c', 'e054bad1-d3f9-4896-960d-8ef0f5ded92b'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:18Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1938, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:19Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:08:23 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:23.915 263406 INFO neutron.agent.dhcp.agent [None req-416f354e-2179-41a6-818b-00d3a0e5bdc0 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '49697408-c01c-4e89-b56b-aa2bd5d6b93d'} is completed#033[00m Dec 2 05:08:23 localhost dnsmasq[318746]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:08:23 localhost podman[318831]: 2025-12-02 10:08:23.935126481 +0000 UTC m=+0.045634897 container kill c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:24 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:24.139 263406 INFO neutron.agent.dhcp.agent [None req-056135d1-cd40-4841-a70a-3a55c6f15857 - - - - - -] DHCP configuration for ports {'4309bb90-9cb8-4e1a-9a0b-f5834db52038'} is completed#033[00m Dec 2 05:08:24 localhost dnsmasq[318746]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:24 localhost podman[318869]: 2025-12-02 10:08:24.266219244 +0000 UTC m=+0.060348359 container kill c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:08:24 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:08:24 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:08:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e149 e149: 6 total, 6 up, 6 in Dec 2 05:08:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:24.638 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:24 localhost ovn_metadata_agent[160216]: 
2025-12-02 10:08:24.640 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:24.643 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port c4648b8c-9385-4d50-be21-eac02960451b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:08:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:24.643 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:24.644 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[762e3d9a-e5ee-46ac-b0ce-bee3713560f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:24 localhost nova_compute[281854]: 2025-12-02 10:08:24.770 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:24 localhost dnsmasq[318746]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:24 localhost podman[318924]: 2025-12-02 10:08:24.989748175 +0000 UTC m=+0.050215549 container kill c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:25 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:25.305 263406 INFO neutron.agent.dhcp.agent [None req-5bd2df99-404d-4dcc-818e-84ad504aa715 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '49697408-c01c-4e89-b56b-aa2bd5d6b93d'} is completed#033[00m Dec 2 05:08:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e150 e150: 6 total, 6 up, 6 in Dec 2 05:08:25 localhost nova_compute[281854]: 2025-12-02 10:08:25.889 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:25 localhost nova_compute[281854]: 2025-12-02 10:08:25.972 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:26 localhost nova_compute[281854]: 2025-12-02 10:08:26.009 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Triggering sync for uuid b254bb7f-2891-4b37-9c44-9700e301ce16 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 2 05:08:26 localhost nova_compute[281854]: 2025-12-02 10:08:26.010 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:08:26 localhost nova_compute[281854]: 2025-12-02 10:08:26.010 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock 
"b254bb7f-2891-4b37-9c44-9700e301ce16" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:08:26 localhost nova_compute[281854]: 2025-12-02 10:08:26.036 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:08:26 localhost nova_compute[281854]: 2025-12-02 10:08:26.047 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:26 localhost ovn_controller[154505]: 2025-12-02T10:08:26Z|00308|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:08:26 localhost nova_compute[281854]: 2025-12-02 10:08:26.231 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:26 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:26.352 2 INFO neutron.agent.securitygroups_rpc [None req-e2093ff2-f702-4cf4-8beb-c324b04696df b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:26 localhost dnsmasq[318746]: exiting on receipt of SIGTERM Dec 2 05:08:26 localhost systemd[1]: tmp-crun.mRLVU7.mount: Deactivated successfully. 
Dec 2 05:08:26 localhost podman[318963]: 2025-12-02 10:08:26.35731189 +0000 UTC m=+0.068955229 container kill c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:08:26 localhost systemd[1]: libpod-c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9.scope: Deactivated successfully. Dec 2 05:08:26 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:26.395 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:26 localhost podman[318976]: 2025-12-02 10:08:26.439396137 +0000 UTC m=+0.066340459 container died c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:08:26 localhost podman[318976]: 2025-12-02 10:08:26.468476912 +0000 UTC m=+0.095421194 container cleanup c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:08:26 localhost systemd[1]: libpod-conmon-c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9.scope: Deactivated successfully. Dec 2 05:08:26 localhost podman[318978]: 2025-12-02 10:08:26.523589631 +0000 UTC m=+0.143029503 container remove c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:08:27 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:27.076 2 INFO neutron.agent.securitygroups_rpc [None req-95108d82-ee5d-48ad-b799-8f24c524b687 378bbf1156ab482eae3359fa477651da 13c70d8f74354389b175376619620536 - - default default] Security group member updated ['20308e6b-d2a0-4e90-a058-a0e30da512e9']#033[00m Dec 2 05:08:27 localhost systemd[1]: var-lib-containers-storage-overlay-dfb9edd785f2c80f522c9fd9120eb8211cfcba13f635ce18efdb23634020145c-merged.mount: Deactivated successfully. Dec 2 05:08:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0e86867b5c789de145da941b9b246fef8ccb5f770b4dbd26db4f1b6d38be5e9-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:08:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:27.538 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:27.540 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:27.542 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port c4648b8c-9385-4d50-be21-eac02960451b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:08:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:27.543 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:27.544 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfecb4f-4508-498c-b10d-0adb4a3c93a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:27 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:08:28 localhost ovn_controller[154505]: 2025-12-02T10:08:28Z|00309|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:08:28 localhost nova_compute[281854]: 2025-12-02 10:08:28.475 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:29 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:29.180 2 INFO neutron.agent.securitygroups_rpc [None req-0c3b85c4-8ee4-4ede-a5c5-9e006eeb1903 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default 
default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:29 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:29.361 2 INFO neutron.agent.securitygroups_rpc [None req-29a3155c-2369-431f-9930-578d28142354 378bbf1156ab482eae3359fa477651da 13c70d8f74354389b175376619620536 - - default default] Security group member updated ['20308e6b-d2a0-4e90-a058-a0e30da512e9']#033[00m Dec 2 05:08:29 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:29.553 2 INFO neutron.agent.securitygroups_rpc [None req-4ad98beb-2033-4528-bdc9-387b15719003 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:29 localhost podman[319059]: Dec 2 05:08:29 localhost podman[319059]: 2025-12-02 10:08:29.562263988 +0000 UTC m=+0.071276161 container create 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:08:29 localhost systemd[1]: Started libpod-conmon-5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e.scope. Dec 2 05:08:29 localhost systemd[1]: tmp-crun.rYzOyq.mount: Deactivated successfully. Dec 2 05:08:29 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1d0402c7888f1d0cd7e68faad9130068442e8068353c15c6ac51afafeb7addf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:29 localhost podman[319059]: 2025-12-02 10:08:29.536028938 +0000 UTC m=+0.045041131 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:29 localhost podman[319059]: 2025-12-02 10:08:29.642725651 +0000 UTC m=+0.151737844 container init 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:08:29 localhost podman[319059]: 2025-12-02 10:08:29.652635176 +0000 UTC m=+0.161647359 container start 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:29 localhost dnsmasq[319079]: started, version 2.85 cachesize 150 Dec 2 05:08:29 localhost dnsmasq[319079]: DNS service limited to local subnets Dec 2 05:08:29 localhost dnsmasq[319079]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:29 localhost dnsmasq[319079]: warning: no upstream servers configured Dec 2 05:08:29 localhost dnsmasq-dhcp[319079]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:08:29 localhost dnsmasq-dhcp[319079]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:08:29 localhost dnsmasq[319079]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:29 localhost dnsmasq-dhcp[319079]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:29 localhost dnsmasq-dhcp[319079]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:29 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:29.729 263406 INFO neutron.agent.dhcp.agent [None req-7b5e6717-cea8-474d-9840-10b7d67b5359 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:28Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=5da4d313-4dc9-4016-98e1-1b58c370bb2e, ip_allocation=immediate, mac_address=fa:16:3e:98:19:1e, name=tempest-NetworksTestDHCPv6-1769082368, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=27, router:external=False, shared=False, 
standard_attr_id=1669, status=ACTIVE, subnets=['33e7892a-f1fb-4759-a941-d291759a7b26', '9c67749b-9d67-48ed-9334-d10de6566a63'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:25Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=1988, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:28Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:08:29 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:29.859 263406 INFO neutron.agent.dhcp.agent [None req-8a7b9314-95d0-4509-b132-7b11625f9741 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '49697408-c01c-4e89-b56b-aa2bd5d6b93d'} is completed#033[00m Dec 2 05:08:29 localhost podman[319096]: 2025-12-02 10:08:29.95823756 +0000 UTC m=+0.052538001 container kill 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:08:29 localhost dnsmasq[319079]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:08:29 localhost dnsmasq-dhcp[319079]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:29 localhost dnsmasq-dhcp[319079]: read 
/var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:30 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:30.199 263406 INFO neutron.agent.dhcp.agent [None req-99847e37-8f50-4fc8-8084-86ecf3626a83 - - - - - -] DHCP configuration for ports {'5da4d313-4dc9-4016-98e1-1b58c370bb2e'} is completed#033[00m Dec 2 05:08:30 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:30.605 2 INFO neutron.agent.securitygroups_rpc [None req-84026af9-2eee-4701-994f-c9f2d1b31806 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:30 localhost dnsmasq[319079]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:30 localhost dnsmasq-dhcp[319079]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:30 localhost dnsmasq-dhcp[319079]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:30 localhost podman[319136]: 2025-12-02 10:08:30.91176704 +0000 UTC m=+0.064431358 container kill 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:08:30 localhost nova_compute[281854]: 2025-12-02 10:08:30.975 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:31 localhost podman[319150]: 2025-12-02 10:08:31.024669769 +0000 UTC m=+0.086735722 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 05:08:31 localhost podman[319150]: 2025-12-02 10:08:31.037914362 +0000 UTC m=+0.099980345 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd) Dec 2 05:08:31 localhost nova_compute[281854]: 2025-12-02 10:08:31.048 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:31 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:08:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e151 e151: 6 total, 6 up, 6 in Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.235333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111235414, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 470, "num_deletes": 252, "total_data_size": 408037, "memory_usage": 417464, "flush_reason": "Manual Compaction"} Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111240461, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 267212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25510, "largest_seqno": 25975, "table_properties": {"data_size": 264600, "index_size": 659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7423, "raw_average_key_size": 21, "raw_value_size": 259020, "raw_average_value_size": 742, "num_data_blocks": 29, "num_entries": 349, "num_filter_entries": 349, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670101, "oldest_key_time": 1764670101, "file_creation_time": 1764670111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 5194 microseconds, and 2091 cpu microseconds. Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.240529) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 267212 bytes OK Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.240558) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.243584) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.243639) EVENT_LOG_v1 {"time_micros": 1764670111243602, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.243664) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 405071, prev total WAL file size 405071, number of live WAL files 2. Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244222) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303037' seq:72057594037927935, type:22 .. 
'6D6772737461740034323538' seq:0, type:0; will stop at (end) Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(260KB)], [42(16MB)] Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111244339, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 17484636, "oldest_snapshot_seqno": -1} Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12455 keys, 15368615 bytes, temperature: kUnknown Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111322786, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 15368615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15300531, "index_size": 35855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31173, "raw_key_size": 336025, "raw_average_key_size": 26, "raw_value_size": 15090997, "raw_average_value_size": 1211, "num_data_blocks": 1340, "num_entries": 12455, "num_filter_entries": 12455, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.323052) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 15368615 bytes Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.325033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.7 rd, 195.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 16.4 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(122.9) write-amplify(57.5) OK, records in: 12978, records dropped: 523 output_compression: NoCompression Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.325052) EVENT_LOG_v1 {"time_micros": 1764670111325044, "job": 24, "event": "compaction_finished", "compaction_time_micros": 78514, "compaction_time_cpu_micros": 30001, "output_level": 6, "num_output_files": 1, "total_output_size": 15368615, "num_input_records": 12978, "num_output_records": 12455, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541913/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111325275, "job": 24, "event": "table_file_deletion", "file_number": 44} Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111326942, "job": 24, "event": "table_file_deletion", "file_number": 42} Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.327051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.327060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.327063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.327073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:08:31.327077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:08:31 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:31.458 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:32 localhost 
dnsmasq[319079]: exiting on receipt of SIGTERM Dec 2 05:08:32 localhost systemd[1]: tmp-crun.qbL2xN.mount: Deactivated successfully. Dec 2 05:08:32 localhost systemd[1]: libpod-5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e.scope: Deactivated successfully. Dec 2 05:08:32 localhost podman[319195]: 2025-12-02 10:08:32.184296432 +0000 UTC m=+0.041385854 container kill 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:32 localhost podman[319210]: 2025-12-02 10:08:32.232593169 +0000 UTC m=+0.043462060 container died 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:08:32 localhost podman[319210]: 2025-12-02 10:08:32.256475045 +0000 UTC m=+0.067343896 container cleanup 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, 
tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:08:32 localhost systemd[1]: libpod-conmon-5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e.scope: Deactivated successfully. Dec 2 05:08:32 localhost podman[319217]: 2025-12-02 10:08:32.304445134 +0000 UTC m=+0.105944584 container remove 5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:33 localhost podman[319290]: Dec 2 05:08:33 localhost podman[319290]: 2025-12-02 10:08:33.030064451 +0000 UTC m=+0.088756946 container create 1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:08:33 localhost systemd[1]: Started libpod-conmon-1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7.scope. Dec 2 05:08:33 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35eb325a33a7c1b7be7aec223582707d0502075bf5da3d993e966b3ab11f7c3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:33 localhost podman[319290]: 2025-12-02 10:08:33.090219014 +0000 UTC m=+0.148911509 container init 1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:33 localhost podman[319290]: 2025-12-02 10:08:32.993941799 +0000 UTC m=+0.052634354 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:33 localhost podman[319290]: 2025-12-02 10:08:33.099564053 +0000 UTC m=+0.158256548 container start 1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:33 localhost dnsmasq[319309]: started, version 2.85 cachesize 150 Dec 2 05:08:33 localhost dnsmasq[319309]: DNS service limited to local subnets Dec 2 05:08:33 localhost dnsmasq[319309]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:33 localhost dnsmasq[319309]: warning: no upstream servers configured Dec 2 05:08:33 localhost dnsmasq-dhcp[319309]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:08:33 localhost dnsmasq[319309]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:33 localhost dnsmasq-dhcp[319309]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:33 localhost dnsmasq-dhcp[319309]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:33 localhost systemd[1]: var-lib-containers-storage-overlay-a1d0402c7888f1d0cd7e68faad9130068442e8068353c15c6ac51afafeb7addf-merged.mount: Deactivated successfully. Dec 2 05:08:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ae1a41050dae7bb980aa0480d9ed16c89529ad5be181bafdf6e1d27c696130e-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:08:33 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e152 e152: 6 total, 6 up, 6 in Dec 2 05:08:33 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:33.697 263406 INFO neutron.agent.dhcp.agent [None req-850fcb4a-fb66-4c3b-806c-e71b9bcd05e5 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '49697408-c01c-4e89-b56b-aa2bd5d6b93d'} is completed#033[00m Dec 2 05:08:33 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:33.974 2 INFO neutron.agent.securitygroups_rpc [None req-3fe80696-0018-4728-a361-06aaa88dce01 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:34.043 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:08:34 localhost systemd[1]: tmp-crun.FINuKz.mount: Deactivated successfully. Dec 2 05:08:34 localhost dnsmasq[319309]: exiting on receipt of SIGTERM Dec 2 05:08:34 localhost podman[319326]: 2025-12-02 10:08:34.047594798 +0000 UTC m=+0.067114760 container kill 1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:08:34 localhost systemd[1]: libpod-1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7.scope: Deactivated successfully. 
Dec 2 05:08:34 localhost openstack_network_exporter[242845]: ERROR 10:08:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:08:34 localhost openstack_network_exporter[242845]: ERROR 10:08:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:08:34 localhost openstack_network_exporter[242845]: ERROR 10:08:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:08:34 localhost openstack_network_exporter[242845]: ERROR 10:08:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:08:34 localhost openstack_network_exporter[242845]: Dec 2 05:08:34 localhost openstack_network_exporter[242845]: ERROR 10:08:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:08:34 localhost openstack_network_exporter[242845]: Dec 2 05:08:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:34.113 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:34.114 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:08:34 localhost ovn_metadata_agent[160216]: 2025-12-02 
10:08:34.115 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:08:34 localhost nova_compute[281854]: 2025-12-02 10:08:34.115 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:34 localhost podman[319339]: 2025-12-02 10:08:34.131337559 +0000 UTC m=+0.062383874 container died 1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:34 localhost podman[319339]: 2025-12-02 10:08:34.159259014 +0000 UTC m=+0.090305289 container cleanup 1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:08:34 localhost systemd[1]: 
libpod-conmon-1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7.scope: Deactivated successfully. Dec 2 05:08:34 localhost systemd[1]: var-lib-containers-storage-overlay-35eb325a33a7c1b7be7aec223582707d0502075bf5da3d993e966b3ab11f7c3b-merged.mount: Deactivated successfully. Dec 2 05:08:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7-userdata-shm.mount: Deactivated successfully. Dec 2 05:08:34 localhost podman[319340]: 2025-12-02 10:08:34.199470885 +0000 UTC m=+0.129538323 container remove 1e64f9bdbc034317b0d5ea7907b398cbc0a4b016edcffa538253b356c83590e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:08:34 localhost kernel: device tap49697408-c0 left promiscuous mode Dec 2 05:08:34 localhost ovn_controller[154505]: 2025-12-02T10:08:34Z|00310|binding|INFO|Releasing lport 49697408-c01c-4e89-b56b-aa2bd5d6b93d from this chassis (sb_readonly=0) Dec 2 05:08:34 localhost ovn_controller[154505]: 2025-12-02T10:08:34Z|00311|binding|INFO|Setting lport 49697408-c01c-4e89-b56b-aa2bd5d6b93d down in Southbound Dec 2 05:08:34 localhost nova_compute[281854]: 2025-12-02 10:08:34.212 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:34.222 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, 
old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fee9:26fd/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '8', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=49697408-c01c-4e89-b56b-aa2bd5d6b93d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:34.224 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 49697408-c01c-4e89-b56b-aa2bd5d6b93d in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:08:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:34.225 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:34.226 160340 DEBUG 
oslo.privsep.daemon [-] privsep: reply[9840e20c-dfb1-4962-9973-18a811e37a9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:34 localhost nova_compute[281854]: 2025-12-02 10:08:34.237 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:34 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:08:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:35.353 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 
'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:35.356 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:35.359 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:35.361 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ad613f61-a4a8-465b-a750-7e6efb649115]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:35 localhost nova_compute[281854]: 2025-12-02 10:08:35.978 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost podman[240799]: time="2025-12-02T10:08:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.049 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 
localhost podman[240799]: @ - - [02/Dec/2025:10:08:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:08:36 localhost podman[240799]: @ - - [02/Dec/2025:10:08:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18769 "" "Go-http-client/1.1" Dec 2 05:08:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:08:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:08:36 localhost podman[319369]: 2025-12-02 10:08:36.43844121 +0000 UTC m=+0.078315448 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 05:08:36 localhost podman[319368]: 2025-12-02 10:08:36.497322269 +0000 UTC m=+0.136497388 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:08:36 localhost podman[319369]: 2025-12-02 10:08:36.501991814 +0000 UTC m=+0.141866072 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:36 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:08:36 localhost podman[319368]: 2025-12-02 10:08:36.558299474 +0000 UTC m=+0.197474593 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:08:36 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:08:36 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:36.584 263406 INFO neutron.agent.linux.ip_lib [None req-00e52bdb-0d1f-451c-9cea-829624b56b9e - - - - - -] Device tapc64e3ca3-73 cannot be used as it has no MAC address#033[00m Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.608 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost kernel: device tapc64e3ca3-73 entered promiscuous mode Dec 2 05:08:36 localhost NetworkManager[5965]: [1764670116.6141] manager: (tapc64e3ca3-73): new Generic device (/org/freedesktop/NetworkManager/Devices/50) Dec 2 05:08:36 localhost ovn_controller[154505]: 2025-12-02T10:08:36Z|00312|binding|INFO|Claiming lport c64e3ca3-73af-44f7-b152-4306718afd23 for this chassis. Dec 2 05:08:36 localhost ovn_controller[154505]: 2025-12-02T10:08:36Z|00313|binding|INFO|c64e3ca3-73af-44f7-b152-4306718afd23: Claiming unknown Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.616 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost systemd-udevd[319425]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.622 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:36.622 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c64e3ca3-73af-44f7-b152-4306718afd23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.623 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost ovn_controller[154505]: 2025-12-02T10:08:36Z|00314|binding|INFO|Setting lport c64e3ca3-73af-44f7-b152-4306718afd23 
ovn-installed in OVS Dec 2 05:08:36 localhost ovn_controller[154505]: 2025-12-02T10:08:36Z|00315|binding|INFO|Setting lport c64e3ca3-73af-44f7-b152-4306718afd23 up in Southbound Dec 2 05:08:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:36.623 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c64e3ca3-73af-44f7-b152-4306718afd23 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.624 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:36.625 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2a8b203c-85f8-48d9-97e1-4bfb27c648a8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:08:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:36.625 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:36.626 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ed42ff10-faad-4a23-89f9-9b200cd87797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:36 localhost journal[230136]: ethtool ioctl error on tapc64e3ca3-73: No such device Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.642 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost journal[230136]: ethtool ioctl error on tapc64e3ca3-73: No such device Dec 2 05:08:36 
localhost journal[230136]: ethtool ioctl error on tapc64e3ca3-73: No such device Dec 2 05:08:36 localhost journal[230136]: ethtool ioctl error on tapc64e3ca3-73: No such device Dec 2 05:08:36 localhost journal[230136]: ethtool ioctl error on tapc64e3ca3-73: No such device Dec 2 05:08:36 localhost journal[230136]: ethtool ioctl error on tapc64e3ca3-73: No such device Dec 2 05:08:36 localhost journal[230136]: ethtool ioctl error on tapc64e3ca3-73: No such device Dec 2 05:08:36 localhost journal[230136]: ethtool ioctl error on tapc64e3ca3-73: No such device Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.675 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:36 localhost nova_compute[281854]: 2025-12-02 10:08:36.701 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:37 localhost podman[319496]: Dec 2 05:08:37 localhost podman[319496]: 2025-12-02 10:08:37.711317452 +0000 UTC m=+0.100372737 container create 70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:08:37 localhost podman[319496]: 2025-12-02 10:08:37.642350493 +0000 UTC m=+0.031405818 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:37 localhost systemd[1]: Started 
libpod-conmon-70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694.scope. Dec 2 05:08:37 localhost systemd[1]: Started libcrun container. Dec 2 05:08:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb0ea69bd56382c7b188d586d00a208525f507b947282f45f01187189ba7005/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:37 localhost podman[319496]: 2025-12-02 10:08:37.790898622 +0000 UTC m=+0.179953907 container init 70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:37 localhost podman[319496]: 2025-12-02 10:08:37.80095274 +0000 UTC m=+0.190008015 container start 70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:08:37 localhost dnsmasq[319514]: started, version 2.85 cachesize 150 Dec 2 05:08:37 localhost dnsmasq[319514]: DNS service limited to local subnets Dec 2 05:08:37 localhost dnsmasq[319514]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP 
no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:37 localhost dnsmasq[319514]: warning: no upstream servers configured Dec 2 05:08:37 localhost dnsmasq-dhcp[319514]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:08:37 localhost dnsmasq[319514]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:37 localhost dnsmasq-dhcp[319514]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:37 localhost dnsmasq-dhcp[319514]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:37 localhost kernel: device tapc64e3ca3-73 left promiscuous mode Dec 2 05:08:37 localhost ovn_controller[154505]: 2025-12-02T10:08:37Z|00316|binding|INFO|Releasing lport c64e3ca3-73af-44f7-b152-4306718afd23 from this chassis (sb_readonly=0) Dec 2 05:08:37 localhost nova_compute[281854]: 2025-12-02 10:08:37.893 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:37 localhost ovn_controller[154505]: 2025-12-02T10:08:37Z|00317|binding|INFO|Setting lport c64e3ca3-73af-44f7-b152-4306718afd23 down in Southbound Dec 2 05:08:37 localhost nova_compute[281854]: 2025-12-02 10:08:37.915 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:38.520 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 
2001:db8::f816:3eff:fe8c:26f8/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c64e3ca3-73af-44f7-b152-4306718afd23) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:38.522 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c64e3ca3-73af-44f7-b152-4306718afd23 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:38.527 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:38.528 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[9a86aa40-2b9a-4f49-a734-15c54a2b8f5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:38.601 263406 INFO neutron.agent.dhcp.agent [None req-d9651824-06dd-4759-9ad7-13c727cf3e17 - - - - - -] DHCP configuration for ports 
{'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:38.651 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 
'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:38.653 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:38.656 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:38.656 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[d72a6455-23fe-4c6b-82a8-1c794e1d2554]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:40.314 263406 INFO neutron.agent.linux.ip_lib [None req-5a362f57-b7ef-4779-b7b2-5aa374c72b75 - - - - - -] Device tap8fbb99e9-2a cannot be used as it has no MAC address#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.339 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost kernel: device tap8fbb99e9-2a entered promiscuous mode Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.345 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost NetworkManager[5965]: [1764670120.3464] manager: (tap8fbb99e9-2a): new Generic device (/org/freedesktop/NetworkManager/Devices/51) Dec 2 05:08:40 localhost ovn_controller[154505]: 2025-12-02T10:08:40Z|00318|binding|INFO|Claiming lport 
8fbb99e9-2ad3-4260-a17b-f7524696dad5 for this chassis. Dec 2 05:08:40 localhost ovn_controller[154505]: 2025-12-02T10:08:40Z|00319|binding|INFO|8fbb99e9-2ad3-4260-a17b-f7524696dad5: Claiming unknown Dec 2 05:08:40 localhost systemd-udevd[319536]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.361 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-8a8e4389-c9b3-4713-b533-7861fccbcf32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a8e4389-c9b3-4713-b533-7861fccbcf32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50b20ebe68c9494a933fabe997d62528', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc4549c7-0142-4249-a0f1-78307f272ad4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8fbb99e9-2ad3-4260-a17b-f7524696dad5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.364 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 8fbb99e9-2ad3-4260-a17b-f7524696dad5 in datapath 8a8e4389-c9b3-4713-b533-7861fccbcf32 bound to our chassis#033[00m Dec 2 
05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.366 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8a8e4389-c9b3-4713-b533-7861fccbcf32 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.371 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[0f16b312-bc22-4129-a9cb-7f3c490e7fe3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:40 localhost ovn_controller[154505]: 2025-12-02T10:08:40Z|00320|binding|INFO|Setting lport 8fbb99e9-2ad3-4260-a17b-f7524696dad5 ovn-installed in OVS Dec 2 05:08:40 localhost ovn_controller[154505]: 2025-12-02T10:08:40Z|00321|binding|INFO|Setting lport 8fbb99e9-2ad3-4260-a17b-f7524696dad5 up in Southbound Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.374 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.376 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.390 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.432 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.462 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost dnsmasq[319514]: exiting on receipt of SIGTERM Dec 2 05:08:40 localhost podman[319554]: 2025-12-02 10:08:40.508246956 +0000 UTC m=+0.048331658 container kill 70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:40 localhost systemd[1]: libpod-70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694.scope: Deactivated successfully. Dec 2 05:08:40 localhost podman[319570]: 2025-12-02 10:08:40.567244668 +0000 UTC m=+0.048173144 container died 70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:08:40 localhost podman[319570]: 2025-12-02 10:08:40.602446747 +0000 UTC m=+0.083375203 container cleanup 70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:40 localhost systemd[1]: libpod-conmon-70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694.scope: Deactivated successfully. Dec 2 05:08:40 localhost podman[319577]: 2025-12-02 10:08:40.629646151 +0000 UTC m=+0.100193070 container remove 70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:08:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:40.696 263406 INFO neutron.agent.linux.ip_lib [None req-e7e3c12b-d55a-492b-9f7f-2d08b4cb8d72 - - - - - -] Device tapc64e3ca3-73 cannot be used as it has no MAC address#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.714 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost kernel: device tapc64e3ca3-73 entered promiscuous mode Dec 2 05:08:40 localhost NetworkManager[5965]: [1764670120.7185] manager: (tapc64e3ca3-73): new Generic device (/org/freedesktop/NetworkManager/Devices/52) Dec 2 05:08:40 localhost systemd-udevd[319538]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:08:40 localhost ovn_controller[154505]: 2025-12-02T10:08:40Z|00322|binding|INFO|Claiming lport c64e3ca3-73af-44f7-b152-4306718afd23 for this chassis. Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.719 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost ovn_controller[154505]: 2025-12-02T10:08:40Z|00323|binding|INFO|c64e3ca3-73af-44f7-b152-4306718afd23: Claiming unknown Dec 2 05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.728 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe8c:26f8/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c64e3ca3-73af-44f7-b152-4306718afd23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:40 
localhost ovn_controller[154505]: 2025-12-02T10:08:40Z|00324|binding|INFO|Setting lport c64e3ca3-73af-44f7-b152-4306718afd23 ovn-installed in OVS Dec 2 05:08:40 localhost ovn_controller[154505]: 2025-12-02T10:08:40Z|00325|binding|INFO|Setting lport c64e3ca3-73af-44f7-b152-4306718afd23 up in Southbound Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.730 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.731 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c64e3ca3-73af-44f7-b152-4306718afd23 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.735 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.734 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2a8b203c-85f8-48d9-97e1-4bfb27c648a8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.735 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:40.736 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ce688c7e-b69d-4269-865d-c4bb85f05954]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.764 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.801 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.824 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:40 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:40.866 2 INFO neutron.agent.securitygroups_rpc [None req-33ec59f6-70cd-4828-b040-1367d796c3cf 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:40 localhost nova_compute[281854]: 2025-12-02 10:08:40.981 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:41 localhost nova_compute[281854]: 2025-12-02 10:08:41.051 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:41 localhost systemd[1]: var-lib-containers-storage-overlay-ebb0ea69bd56382c7b188d586d00a208525f507b947282f45f01187189ba7005-merged.mount: Deactivated successfully. Dec 2 05:08:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70338df7c6654c96d78e0278a6a2ea84b179f75ddef4eb5f00f98c5bcbd88694-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:08:41 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:41.223 2 INFO neutron.agent.securitygroups_rpc [None req-67c167f7-d811-43ca-8236-d9881acaf013 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:08:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e153 e153: 6 total, 6 up, 6 in Dec 2 05:08:41 localhost podman[319679]: Dec 2 05:08:41 localhost podman[319679]: 2025-12-02 10:08:41.420137608 +0000 UTC m=+0.107600619 container create de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:41 localhost systemd[1]: Started libpod-conmon-de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a.scope. Dec 2 05:08:41 localhost podman[319679]: 2025-12-02 10:08:41.371899351 +0000 UTC m=+0.059362402 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:41 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331fe4c85053fe605b3d7d394a4a65cbed0aeb4f2e3994530c0c6c0a05693b91/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:41 localhost podman[319679]: 2025-12-02 10:08:41.489119975 +0000 UTC m=+0.176582996 container init de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:08:41 localhost dnsmasq[319699]: started, version 2.85 cachesize 150 Dec 2 05:08:41 localhost dnsmasq[319699]: DNS service limited to local subnets Dec 2 05:08:41 localhost dnsmasq[319699]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:41 localhost dnsmasq[319699]: warning: no upstream servers configured Dec 2 05:08:41 localhost dnsmasq-dhcp[319699]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:08:41 localhost dnsmasq[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/addn_hosts - 0 addresses Dec 2 05:08:41 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/host Dec 2 05:08:41 localhost podman[319679]: 2025-12-02 10:08:41.509124128 +0000 UTC m=+0.196587149 container start de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:08:41 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/opts Dec 2 05:08:41 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:41.528 2 INFO neutron.agent.securitygroups_rpc [None req-77ccc14a-3033-433b-916a-b05c2a4a2183 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:41.654 263406 INFO neutron.agent.dhcp.agent [None req-1f853948-0a75-41a1-afa8-3893d973fa67 - - - - - -] DHCP configuration for ports {'4934921f-c372-4513-ba83-48451840e960'} is completed#033[00m Dec 2 05:08:41 localhost podman[319720]: Dec 2 05:08:41 localhost podman[319720]: 2025-12-02 10:08:41.706227062 +0000 UTC m=+0.092100316 container create 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:41 localhost systemd[1]: Started libpod-conmon-7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067.scope. 
Dec 2 05:08:41 localhost systemd[1]: Started libcrun container. Dec 2 05:08:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83cf781f8dcd5525026416e9b72f7ccd15f8eef76d821a9886753d0d7e285a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:41 localhost podman[319720]: 2025-12-02 10:08:41.664585812 +0000 UTC m=+0.050459156 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:41 localhost podman[319720]: 2025-12-02 10:08:41.767493854 +0000 UTC m=+0.153367098 container init 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:08:41 localhost podman[319720]: 2025-12-02 10:08:41.779571566 +0000 UTC m=+0.165444820 container start 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:41 localhost dnsmasq[319739]: started, version 2.85 cachesize 150 Dec 2 05:08:41 localhost dnsmasq[319739]: DNS service limited to local subnets Dec 2 05:08:41 localhost 
dnsmasq[319739]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:41 localhost dnsmasq[319739]: warning: no upstream servers configured Dec 2 05:08:41 localhost dnsmasq-dhcp[319739]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:08:41 localhost dnsmasq[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:41 localhost dnsmasq-dhcp[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:41 localhost dnsmasq-dhcp[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:41.845 263406 INFO neutron.agent.dhcp.agent [None req-fb8bcf1a-2f41-4e5f-a9d1-80eab479a1e2 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:40Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=7093b5e3-4af2-43eb-9bb8-fdb6491b81ff, ip_allocation=immediate, mac_address=fa:16:3e:b8:f6:0b, name=tempest-NetworksTestDHCPv6-610894895, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=31, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, 
subnets=['0f1bf5f6-1ea6-475e-8b92-333c1acae145', '37da3fdc-3c05-4495-96b3-7d5c496a8839'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:35Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2045, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:40Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:08:42 localhost dnsmasq[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:08:42 localhost podman[319758]: 2025-12-02 10:08:42.070668984 +0000 UTC m=+0.047879877 container kill 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:08:42 localhost dnsmasq-dhcp[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:42 localhost dnsmasq-dhcp[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:42 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:42.501 2 INFO neutron.agent.securitygroups_rpc [None req-fd3bc9cf-ed5a-495f-beed-1c7d898feb8a 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m 
Dec 2 05:08:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:42.595 263406 INFO neutron.agent.dhcp.agent [None req-7b0f0269-e6f2-4d89-bebb-aaf1e6def218 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', 'c64e3ca3-73af-44f7-b152-4306718afd23'} is completed#033[00m Dec 2 05:08:42 localhost dnsmasq[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:42 localhost dnsmasq-dhcp[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:42 localhost dnsmasq-dhcp[319739]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:42 localhost podman[319796]: 2025-12-02 10:08:42.792772476 +0000 UTC m=+0.072344638 container kill 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:08:42 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:42.950 2 INFO neutron.agent.securitygroups_rpc [None req-98ebf0df-0324-4fc7-82f5-9efe0544203a 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:43.217 263406 INFO neutron.agent.dhcp.agent [None req-1a3d8fcc-b621-4cba-9f0b-cce73f1cef1a - - - - - -] DHCP configuration for ports {'7093b5e3-4af2-43eb-9bb8-fdb6491b81ff'} is completed#033[00m Dec 2 05:08:44 localhost podman[319835]: 2025-12-02 10:08:44.07255265 +0000 
UTC m=+0.051980505 container kill 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:08:44 localhost dnsmasq[319739]: exiting on receipt of SIGTERM Dec 2 05:08:44 localhost systemd[1]: libpod-7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067.scope: Deactivated successfully. Dec 2 05:08:44 localhost podman[319850]: 2025-12-02 10:08:44.155362798 +0000 UTC m=+0.065010934 container died 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:08:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067-userdata-shm.mount: Deactivated successfully. Dec 2 05:08:44 localhost systemd[1]: var-lib-containers-storage-overlay-e83cf781f8dcd5525026416e9b72f7ccd15f8eef76d821a9886753d0d7e285a2-merged.mount: Deactivated successfully. 
Dec 2 05:08:44 localhost podman[319850]: 2025-12-02 10:08:44.194347277 +0000 UTC m=+0.103995353 container cleanup 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:44 localhost systemd[1]: libpod-conmon-7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067.scope: Deactivated successfully. Dec 2 05:08:44 localhost podman[319851]: 2025-12-02 10:08:44.223651087 +0000 UTC m=+0.127996611 container remove 7de61421e34b4879fbe4da74285d5a8f3b2a3f5f38f1c0a73170c12bd8ad0067 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:08:45 localhost podman[319930]: Dec 2 05:08:45 localhost podman[319930]: 2025-12-02 10:08:45.038900573 +0000 UTC m=+0.095383173 container create fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:08:45 localhost systemd[1]: Started libpod-conmon-fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9.scope. Dec 2 05:08:45 localhost podman[319930]: 2025-12-02 10:08:44.989802865 +0000 UTC m=+0.046285525 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:45 localhost systemd[1]: Started libcrun container. Dec 2 05:08:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40a1d2655a37158c8f6a41872170a0475aa15be61f8cbdc512f99b733d995a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:45 localhost podman[319930]: 2025-12-02 10:08:45.111694553 +0000 UTC m=+0.168177153 container init fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:45 localhost systemd[1]: tmp-crun.SF2vQN.mount: Deactivated successfully. 
Dec 2 05:08:45 localhost podman[319930]: 2025-12-02 10:08:45.123811977 +0000 UTC m=+0.180294577 container start fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:08:45 localhost dnsmasq[319948]: started, version 2.85 cachesize 150 Dec 2 05:08:45 localhost dnsmasq[319948]: DNS service limited to local subnets Dec 2 05:08:45 localhost dnsmasq[319948]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:45 localhost dnsmasq[319948]: warning: no upstream servers configured Dec 2 05:08:45 localhost dnsmasq[319948]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:45.539 263406 INFO neutron.agent.dhcp.agent [None req-84e17728-afdf-4a85-9c31-60e0bf442626 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', 'c64e3ca3-73af-44f7-b152-4306718afd23'} is completed#033[00m Dec 2 05:08:45 localhost dnsmasq[319948]: exiting on receipt of SIGTERM Dec 2 05:08:45 localhost podman[319967]: 2025-12-02 10:08:45.546038228 +0000 UTC m=+0.061366437 container kill fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:45 localhost systemd[1]: libpod-fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9.scope: Deactivated successfully. Dec 2 05:08:45 localhost podman[319979]: 2025-12-02 10:08:45.629394539 +0000 UTC m=+0.068060644 container died fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 2 05:08:45 localhost podman[319979]: 2025-12-02 10:08:45.662154793 +0000 UTC m=+0.100820848 container cleanup fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:45 localhost systemd[1]: libpod-conmon-fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9.scope: Deactivated successfully. 
Dec 2 05:08:45 localhost podman[319981]: 2025-12-02 10:08:45.698410779 +0000 UTC m=+0.130288954 container remove fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:08:45 localhost ovn_controller[154505]: 2025-12-02T10:08:45Z|00326|binding|INFO|Releasing lport c64e3ca3-73af-44f7-b152-4306718afd23 from this chassis (sb_readonly=0) Dec 2 05:08:45 localhost kernel: device tapc64e3ca3-73 left promiscuous mode Dec 2 05:08:45 localhost ovn_controller[154505]: 2025-12-02T10:08:45Z|00327|binding|INFO|Setting lport c64e3ca3-73af-44f7-b152-4306718afd23 down in Southbound Dec 2 05:08:45 localhost nova_compute[281854]: 2025-12-02 10:08:45.713 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:45.721 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe8c:26f8/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 
'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c64e3ca3-73af-44f7-b152-4306718afd23) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:45.723 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c64e3ca3-73af-44f7-b152-4306718afd23 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:08:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:45.726 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:45.727 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3bb8c3-028b-4aaf-8c2a-2c009019f87d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:45 localhost nova_compute[281854]: 2025-12-02 10:08:45.729 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:45 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:45.828 2 INFO neutron.agent.securitygroups_rpc 
[None req-a2676272-e4a7-4aab-af43-dd7cd656aeb3 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:45 localhost nova_compute[281854]: 2025-12-02 10:08:45.984 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:46 localhost nova_compute[281854]: 2025-12-02 10:08:46.053 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:46 localhost systemd[1]: var-lib-containers-storage-overlay-f40a1d2655a37158c8f6a41872170a0475aa15be61f8cbdc512f99b733d995a2-merged.mount: Deactivated successfully. Dec 2 05:08:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fab9dc8abfb3d0531868a0a937b82bfaa261f85a204c1ee9edcb58bbbe1c4dc9-userdata-shm.mount: Deactivated successfully. Dec 2 05:08:46 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:08:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:08:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:47.353 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:47.355 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:08:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:47.356 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:47.357 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1ea5ed-684c-4096-b927-720470c4675a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:47 localhost podman[320010]: 2025-12-02 10:08:47.449789511 +0000 UTC m=+0.088874740 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 2 05:08:47 localhost podman[320010]: 2025-12-02 10:08:47.458881843 +0000 UTC m=+0.097967092 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:08:47 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:08:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:48.032 263406 INFO neutron.agent.linux.ip_lib [None req-7b86167a-ba06-4d0d-8e25-50da2a5af72a - - - - - -] Device tap4b41fcb7-46 cannot be used as it has no MAC address#033[00m Dec 2 05:08:48 localhost nova_compute[281854]: 2025-12-02 10:08:48.058 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:48 localhost kernel: device tap4b41fcb7-46 entered promiscuous mode Dec 2 05:08:48 localhost NetworkManager[5965]: [1764670128.0679] manager: (tap4b41fcb7-46): new Generic device (/org/freedesktop/NetworkManager/Devices/53) Dec 2 05:08:48 localhost nova_compute[281854]: 2025-12-02 10:08:48.068 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:48 localhost ovn_controller[154505]: 2025-12-02T10:08:48Z|00328|binding|INFO|Claiming lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 for this chassis. 
Dec 2 05:08:48 localhost ovn_controller[154505]: 2025-12-02T10:08:48Z|00329|binding|INFO|4b41fcb7-4686-4ae0-bf20-f32d06645ac4: Claiming unknown Dec 2 05:08:48 localhost systemd-udevd[320039]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:08:48 localhost ovn_controller[154505]: 2025-12-02T10:08:48Z|00330|binding|INFO|Setting lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 ovn-installed in OVS Dec 2 05:08:48 localhost nova_compute[281854]: 2025-12-02 10:08:48.080 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:48 localhost journal[230136]: ethtool ioctl error on tap4b41fcb7-46: No such device Dec 2 05:08:48 localhost nova_compute[281854]: 2025-12-02 10:08:48.099 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:48 localhost journal[230136]: ethtool ioctl error on tap4b41fcb7-46: No such device Dec 2 05:08:48 localhost journal[230136]: ethtool ioctl error on tap4b41fcb7-46: No such device Dec 2 05:08:48 localhost journal[230136]: ethtool ioctl error on tap4b41fcb7-46: No such device Dec 2 05:08:48 localhost journal[230136]: ethtool ioctl error on tap4b41fcb7-46: No such device Dec 2 05:08:48 localhost journal[230136]: ethtool ioctl error on tap4b41fcb7-46: No such device Dec 2 05:08:48 localhost journal[230136]: ethtool ioctl error on tap4b41fcb7-46: No such device Dec 2 05:08:48 localhost journal[230136]: ethtool ioctl error on tap4b41fcb7-46: No such device Dec 2 05:08:48 localhost nova_compute[281854]: 2025-12-02 10:08:48.142 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:48 localhost nova_compute[281854]: 2025-12-02 10:08:48.172 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:48 localhost ovn_controller[154505]: 2025-12-02T10:08:48Z|00331|binding|INFO|Setting lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 up in Southbound Dec 2 05:08:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:48.543 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4b41fcb7-4686-4ae0-bf20-f32d06645ac4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:48 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:48.543 2 INFO neutron.agent.securitygroups_rpc [None req-1e3a85d5-a4d4-4ac9-b4fb-7c32fb08bdf0 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:48 
localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:48.545 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:08:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:48.548 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 82df1984-655f-43b7-8e68-0cea428fb7f6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:08:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:48.549 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:48.550 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[986e5f78-c558-4b96-98c0-8389959c3678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:49 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:49.207 263406 INFO neutron.agent.linux.ip_lib [None req-81205318-602f-4d4a-b3cc-7cfb247118e4 - - - - - -] Device tap71712210-e7 cannot be used as it has no MAC address#033[00m Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.243 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:49 localhost kernel: device tap71712210-e7 entered promiscuous mode Dec 2 05:08:49 localhost ovn_controller[154505]: 2025-12-02T10:08:49Z|00332|binding|INFO|Claiming lport 71712210-e77f-42b0-bbd5-d3267949cb4f for this chassis. 
Dec 2 05:08:49 localhost ovn_controller[154505]: 2025-12-02T10:08:49Z|00333|binding|INFO|71712210-e77f-42b0-bbd5-d3267949cb4f: Claiming unknown Dec 2 05:08:49 localhost systemd-udevd[320041]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:08:49 localhost NetworkManager[5965]: [1764670129.2511] manager: (tap71712210-e7): new Generic device (/org/freedesktop/NetworkManager/Devices/54) Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.250 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:49.264 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-bfdb46d8-0ab9-4f91-af70-05b63804efe6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfdb46d8-0ab9-4f91-af70-05b63804efe6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf7b847f-2e87-42d4-a87d-a72dff8a08d3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=71712210-e77f-42b0-bbd5-d3267949cb4f) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:49.266 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 71712210-e77f-42b0-bbd5-d3267949cb4f in datapath bfdb46d8-0ab9-4f91-af70-05b63804efe6 bound to our chassis#033[00m Dec 2 05:08:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:49.268 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bfdb46d8-0ab9-4f91-af70-05b63804efe6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:08:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:49.269 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[7774b60e-6d90-4ced-a6e1-662c3689704b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:49 localhost ovn_controller[154505]: 2025-12-02T10:08:49Z|00334|binding|INFO|Setting lport 71712210-e77f-42b0-bbd5-d3267949cb4f ovn-installed in OVS Dec 2 05:08:49 localhost ovn_controller[154505]: 2025-12-02T10:08:49Z|00335|binding|INFO|Setting lport 71712210-e77f-42b0-bbd5-d3267949cb4f up in Southbound Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.281 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.305 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.354 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.392 281858 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:49 localhost podman[320125]: Dec 2 05:08:49 localhost podman[320125]: 2025-12-02 10:08:49.46187248 +0000 UTC m=+0.102804151 container create d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:08:49 localhost systemd[1]: Started libpod-conmon-d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079.scope. Dec 2 05:08:49 localhost podman[320125]: 2025-12-02 10:08:49.416936192 +0000 UTC m=+0.057867903 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:49 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bb415e42759f888aad2ad6426896c066270642db5d3de3206e231e1a2e1442/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:49 localhost podman[320125]: 2025-12-02 10:08:49.555472575 +0000 UTC m=+0.196404246 container init d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:08:49 localhost podman[320125]: 2025-12-02 10:08:49.566157749 +0000 UTC m=+0.207089420 container start d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:08:49 localhost dnsmasq[320148]: started, version 2.85 cachesize 150 Dec 2 05:08:49 localhost dnsmasq[320148]: DNS service limited to local subnets Dec 2 05:08:49 localhost dnsmasq[320148]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:49 localhost dnsmasq[320148]: warning: no upstream servers configured Dec 
2 05:08:49 localhost dnsmasq-dhcp[320148]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:08:49 localhost dnsmasq[320148]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:49 localhost dnsmasq-dhcp[320148]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:49 localhost dnsmasq-dhcp[320148]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:49 localhost ovn_controller[154505]: 2025-12-02T10:08:49Z|00336|binding|INFO|Releasing lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 from this chassis (sb_readonly=0) Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.679 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:49 localhost ovn_controller[154505]: 2025-12-02T10:08:49Z|00337|binding|INFO|Setting lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 down in Southbound Dec 2 05:08:49 localhost kernel: device tap4b41fcb7-46 left promiscuous mode Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.707 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:49.866 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4b41fcb7-4686-4ae0-bf20-f32d06645ac4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:49.868 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:08:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:49.871 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:49.872 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[0b83dcd6-c644-45a6-b5f4-66aa6e602899]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:49 localhost nova_compute[281854]: 2025-12-02 10:08:49.948 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:49 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:49.955 2 
INFO neutron.agent.securitygroups_rpc [None req-59adf8f3-045e-4261-a367-eea8612462ef 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:50 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:50.074 263406 INFO neutron.agent.dhcp.agent [None req-88eeb533-b19d-4714-a306-9331fb97097e - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:08:50 localhost podman[320191]: Dec 2 05:08:50 localhost podman[320191]: 2025-12-02 10:08:50.572943969 +0000 UTC m=+0.102482382 container create 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:08:50 localhost systemd[1]: Started libpod-conmon-5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4.scope. Dec 2 05:08:50 localhost systemd[1]: tmp-crun.XFkaxx.mount: Deactivated successfully. Dec 2 05:08:50 localhost podman[320191]: 2025-12-02 10:08:50.529265255 +0000 UTC m=+0.058803688 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:50 localhost systemd[1]: Started libcrun container. 
Dec 2 05:08:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155e384317babd96cf2cd1a89ffbaad899eea093c053e3a7d2f3cc36440b4ff7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:50 localhost podman[320191]: 2025-12-02 10:08:50.644004472 +0000 UTC m=+0.173542855 container init 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:50 localhost podman[320191]: 2025-12-02 10:08:50.655233852 +0000 UTC m=+0.184772235 container start 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:08:50 localhost dnsmasq[320209]: started, version 2.85 cachesize 150 Dec 2 05:08:50 localhost dnsmasq[320209]: DNS service limited to local subnets Dec 2 05:08:50 localhost dnsmasq[320209]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:50 localhost dnsmasq[320209]: warning: no upstream servers configured Dec 
2 05:08:50 localhost dnsmasq-dhcp[320209]: DHCP, static leases only on 10.101.0.0, lease time 1d Dec 2 05:08:50 localhost dnsmasq[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/addn_hosts - 0 addresses Dec 2 05:08:50 localhost dnsmasq-dhcp[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/host Dec 2 05:08:50 localhost dnsmasq-dhcp[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/opts Dec 2 05:08:50 localhost nova_compute[281854]: 2025-12-02 10:08:50.987 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:51 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:51.028 263406 INFO neutron.agent.dhcp.agent [None req-b2a2a912-7625-45d2-abe6-129620fcf6c5 - - - - - -] DHCP configuration for ports {'d5170ff3-2008-409b-ab16-990861d5c150'} is completed#033[00m Dec 2 05:08:51 localhost nova_compute[281854]: 2025-12-02 10:08:51.055 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:08:51 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:51.120 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:50Z, description=, device_id=aa61d6a1-1090-4f06-abf3-fa0ed7c99a0f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=42dd30a6-6487-4d6c-bb07-70b967b53b83, ip_allocation=immediate, mac_address=fa:16:3e:95:e1:4a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:08:36Z, description=, dns_domain=, id=8a8e4389-c9b3-4713-b533-7861fccbcf32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-306705527, port_security_enabled=True, project_id=50b20ebe68c9494a933fabe997d62528, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29831, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2021, status=ACTIVE, subnets=['bbb45bcf-ad23-4ed6-8ed8-1a191d2f154d'], tags=[], tenant_id=50b20ebe68c9494a933fabe997d62528, updated_at=2025-12-02T10:08:38Z, vlan_transparent=None, network_id=8a8e4389-c9b3-4713-b533-7861fccbcf32, port_security_enabled=False, project_id=50b20ebe68c9494a933fabe997d62528, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2077, status=DOWN, tags=[], tenant_id=50b20ebe68c9494a933fabe997d62528, updated_at=2025-12-02T10:08:50Z on network 8a8e4389-c9b3-4713-b533-7861fccbcf32#033[00m Dec 2 05:08:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:51 localhost 
podman[320210]: 2025-12-02 10:08:51.202057133 +0000 UTC m=+0.093792520 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:08:51 localhost podman[320210]: 2025-12-02 10:08:51.207111138 +0000 UTC m=+0.098846565 container 
exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Dec 2 05:08:51 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:08:51 localhost dnsmasq[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/addn_hosts - 1 addresses Dec 2 05:08:51 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/host Dec 2 05:08:51 localhost podman[320245]: 2025-12-02 10:08:51.38351852 +0000 UTC m=+0.072509214 container kill de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:08:51 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/opts Dec 2 05:08:51 localhost systemd[1]: tmp-crun.GbVTzC.mount: Deactivated successfully. 
Dec 2 05:08:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:51.457 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:08:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:51.460 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m
Dec 2 05:08:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:51.463 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:08:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:51.464 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a38bd617-d7ba-4c26-a12c-e9ed0c35a33a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:08:51 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:51.699 263406 INFO neutron.agent.dhcp.agent [None req-6e436cec-7b51-462b-849b-c6ad4521531e - - - - - -] DHCP configuration for ports {'42dd30a6-6487-4d6c-bb07-70b967b53b83'} is completed#033[00m
Dec 2 05:08:51 localhost dnsmasq[320148]: exiting on receipt of SIGTERM
Dec 2 05:08:51 localhost podman[320282]: 2025-12-02 10:08:51.80907007 +0000 UTC m=+0.059953318 container kill d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 05:08:51 localhost systemd[1]: libpod-d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079.scope: Deactivated successfully.
Dec 2 05:08:51 localhost podman[320295]: 2025-12-02 10:08:51.884302485 +0000 UTC m=+0.056530857 container died d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 05:08:51 localhost podman[320295]: 2025-12-02 10:08:51.922287707 +0000 UTC m=+0.094516069 container cleanup d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 2 05:08:51 localhost systemd[1]: libpod-conmon-d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079.scope: Deactivated successfully.
Dec 2 05:08:51 localhost podman[320296]: 2025-12-02 10:08:51.966073634 +0000 UTC m=+0.133810547 container remove d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 2 05:08:52 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:52.022 263406 INFO neutron.agent.linux.ip_lib [None req-bcc5338e-a4f4-4633-a0fe-4889008fdbcf - - - - - -] Device tap4b41fcb7-46 cannot be used as it has no MAC address#033[00m
Dec 2 05:08:52 localhost nova_compute[281854]: 2025-12-02 10:08:52.050 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:52 localhost kernel: device tap4b41fcb7-46 entered promiscuous mode
Dec 2 05:08:52 localhost ovn_controller[154505]: 2025-12-02T10:08:52Z|00338|binding|INFO|Claiming lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 for this chassis.
Dec 2 05:08:52 localhost ovn_controller[154505]: 2025-12-02T10:08:52Z|00339|binding|INFO|4b41fcb7-4686-4ae0-bf20-f32d06645ac4: Claiming unknown
Dec 2 05:08:52 localhost nova_compute[281854]: 2025-12-02 10:08:52.060 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:52 localhost NetworkManager[5965]: [1764670132.0627] manager: (tap4b41fcb7-46): new Generic device (/org/freedesktop/NetworkManager/Devices/55)
Dec 2 05:08:52 localhost ovn_controller[154505]: 2025-12-02T10:08:52Z|00340|binding|INFO|Setting lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 ovn-installed in OVS
Dec 2 05:08:52 localhost nova_compute[281854]: 2025-12-02 10:08:52.070 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:52 localhost nova_compute[281854]: 2025-12-02 10:08:52.075 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:52 localhost nova_compute[281854]: 2025-12-02 10:08:52.100 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:52 localhost systemd[1]: var-lib-containers-storage-overlay-b6bb415e42759f888aad2ad6426896c066270642db5d3de3206e231e1a2e1442-merged.mount: Deactivated successfully.
Dec 2 05:08:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7a8a58c79152eb2cabe055c8c789290ff0dc544ecd130fbffe1094542360079-userdata-shm.mount: Deactivated successfully.
Dec 2 05:08:52 localhost nova_compute[281854]: 2025-12-02 10:08:52.139 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:52 localhost nova_compute[281854]: 2025-12-02 10:08:52.170 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:52 localhost ovn_controller[154505]: 2025-12-02T10:08:52Z|00341|binding|INFO|Setting lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 up in Southbound Dec 2 05:08:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:52.465 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe45:c69d/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4b41fcb7-4686-4ae0-bf20-f32d06645ac4) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:08:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:52.468 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m
Dec 2 05:08:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:52.470 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 82df1984-655f-43b7-8e68-0cea428fb7f6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 05:08:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:52.471 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:08:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:52.472 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[2e76a852-4412-49a8-87cb-56dadfa2d5a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:08:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.
Dec 2 05:08:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.
Dec 2 05:08:52 localhost systemd[1]: tmp-crun.ienCYF.mount: Deactivated successfully.
Dec 2 05:08:53 localhost podman[320377]: 2025-12-02 10:08:53.003085929 +0000 UTC m=+0.141034079 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:08:53 localhost podman[320378]: 2025-12-02 10:08:53.025829755 +0000 UTC m=+0.156781219 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:08:53 localhost podman[320378]: 2025-12-02 10:08:53.038061122 +0000 UTC m=+0.169012606 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:08:53 localhost podman[320377]: 2025-12-02 10:08:53.052039735 +0000 UTC m=+0.189987845 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, 
vcs-type=git, name=ubi9-minimal, release=1755695350, distribution-scope=public, config_id=edpm, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec 2 05:08:53 localhost podman[320407]:
Dec 2 05:08:53 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully.
Dec 2 05:08:53 localhost podman[320407]: 2025-12-02 10:08:53.067241689 +0000 UTC m=+0.111326678 container create 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 2 05:08:53 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully.
Dec 2 05:08:53 localhost systemd[1]: Started libpod-conmon-6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e.scope.
Dec 2 05:08:53 localhost systemd[1]: Started libcrun container.
Dec 2 05:08:53 localhost systemd[1]: tmp-crun.prSsMM.mount: Deactivated successfully.
Dec 2 05:08:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3e505d8e880e23e61b81d3c4b057d6f4c504d64d83c246c85ab5ae5ccbaf0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:08:53 localhost podman[320407]: 2025-12-02 10:08:53.021387228 +0000 UTC m=+0.065472257 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:08:53 localhost podman[320407]: 2025-12-02 10:08:53.127055753 +0000 UTC m=+0.171140772 container init 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 05:08:53 localhost podman[320407]: 2025-12-02 10:08:53.13591845 +0000 UTC m=+0.180003449 container start 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:08:53 localhost dnsmasq[320446]: started, version 2.85 cachesize 150
Dec 2 05:08:53 localhost dnsmasq[320446]: DNS service limited to local subnets
Dec 2 05:08:53 localhost dnsmasq[320446]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:08:53 localhost dnsmasq[320446]: warning: no upstream servers configured
Dec 2 05:08:53 localhost dnsmasq-dhcp[320446]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:08:53 localhost dnsmasq-dhcp[320446]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 2 05:08:53 localhost dnsmasq[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:08:53 localhost dnsmasq-dhcp[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host
Dec 2 05:08:53 localhost dnsmasq-dhcp[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts
Dec 2 05:08:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:53.582 263406 INFO neutron.agent.dhcp.agent [None req-fa38b537-7ace-4b28-a4c8-008521b6d3e6 - - - - - -] DHCP configuration for ports {'4b41fcb7-4686-4ae0-bf20-f32d06645ac4', 'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m
Dec 2 05:08:53 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:53.585 2 INFO neutron.agent.securitygroups_rpc [None req-e08d9635-b9e8-48c9-978b-72b4270a2462 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:08:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:54.023 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:53Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=9684ac42-ac06-402a-9d43-f3e1def9fb6d, ip_allocation=immediate, mac_address=fa:16:3e:e1:25:26, name=tempest-NetworksTestDHCPv6-840276124,
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=35, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['bf84a78a-769c-4c8f-85c7-0080eaede2d7', 'eb9c3d44-1811-4f82-96ed-c16d4fac6b84'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:48Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2079, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:08:53Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:08:54 localhost dnsmasq[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:08:54 localhost dnsmasq-dhcp[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:54 localhost podman[320463]: 2025-12-02 10:08:54.285050643 +0000 UTC m=+0.067633083 container kill 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:08:54 localhost dnsmasq-dhcp[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:54.544 263406 INFO neutron.agent.dhcp.agent [None req-e3bb298c-0b95-4437-9ac9-de7cea9e1e2c - - - - - -] DHCP configuration for ports {'9684ac42-ac06-402a-9d43-f3e1def9fb6d'} is completed#033[00m Dec 2 05:08:54 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:54.611 2 INFO neutron.agent.securitygroups_rpc [None req-3c6b2ae8-8852-4b91-8b57-db72052e455d 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:54.745 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:50Z, description=, device_id=aa61d6a1-1090-4f06-abf3-fa0ed7c99a0f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=42dd30a6-6487-4d6c-bb07-70b967b53b83, ip_allocation=immediate, mac_address=fa:16:3e:95:e1:4a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:08:36Z, description=, dns_domain=, id=8a8e4389-c9b3-4713-b533-7861fccbcf32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-306705527, port_security_enabled=True, project_id=50b20ebe68c9494a933fabe997d62528, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29831, qos_policy_id=None, 
revision_number=2, router:external=False, shared=False, standard_attr_id=2021, status=ACTIVE, subnets=['bbb45bcf-ad23-4ed6-8ed8-1a191d2f154d'], tags=[], tenant_id=50b20ebe68c9494a933fabe997d62528, updated_at=2025-12-02T10:08:38Z, vlan_transparent=None, network_id=8a8e4389-c9b3-4713-b533-7861fccbcf32, port_security_enabled=False, project_id=50b20ebe68c9494a933fabe997d62528, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2077, status=DOWN, tags=[], tenant_id=50b20ebe68c9494a933fabe997d62528, updated_at=2025-12-02T10:08:50Z on network 8a8e4389-c9b3-4713-b533-7861fccbcf32#033[00m Dec 2 05:08:54 localhost dnsmasq[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/addn_hosts - 1 addresses Dec 2 05:08:54 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/host Dec 2 05:08:54 localhost podman[320503]: 2025-12-02 10:08:54.968929068 +0000 UTC m=+0.068472955 container kill de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 2 05:08:54 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/opts Dec 2 05:08:55 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:55.004 2 INFO neutron.agent.securitygroups_rpc [None req-c6b91e56-d8d3-49df-8fb0-b7b5f6e00308 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated 
['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m
Dec 2 05:08:55 localhost dnsmasq[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses
Dec 2 05:08:55 localhost dnsmasq-dhcp[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host
Dec 2 05:08:55 localhost dnsmasq-dhcp[320446]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts
Dec 2 05:08:55 localhost podman[320542]: 2025-12-02 10:08:55.256530222 +0000 UTC m=+0.061792187 container kill 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:08:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:55.266 263406 INFO neutron.agent.dhcp.agent [None req-03e14478-4c91-4d9f-8fe0-ab95191d78ec - - - - - -] DHCP configuration for ports {'42dd30a6-6487-4d6c-bb07-70b967b53b83'} is completed#033[00m
Dec 2 05:08:55 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:55.871 2 INFO neutron.agent.securitygroups_rpc [None req-62ccdf95-04fd-49bc-8e08-6a4afcc10f44 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m
Dec 2 05:08:55 localhost nova_compute[281854]: 2025-12-02 10:08:55.991 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:08:56 localhost nova_compute[281854]: 2025-12-02 10:08:56.057 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-]
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:08:56 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:56.645 2 INFO neutron.agent.securitygroups_rpc [None req-2779f3cb-e43d-46a6-b42c-1a159a69c67f b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:08:56 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:56.728 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:55Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=dba2acb3-305f-411c-a151-68276b1c53d9, ip_allocation=immediate, mac_address=fa:16:3e:0a:30:cd, name=tempest-FloatingIPTestJSON-1300587785, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:08:36Z, description=, dns_domain=, id=8a8e4389-c9b3-4713-b533-7861fccbcf32, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPTestJSON-306705527, port_security_enabled=True, project_id=50b20ebe68c9494a933fabe997d62528, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29831, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2021, status=ACTIVE, subnets=['bbb45bcf-ad23-4ed6-8ed8-1a191d2f154d'], tags=[], tenant_id=50b20ebe68c9494a933fabe997d62528, updated_at=2025-12-02T10:08:38Z, vlan_transparent=None, 
network_id=8a8e4389-c9b3-4713-b533-7861fccbcf32, port_security_enabled=True, project_id=50b20ebe68c9494a933fabe997d62528, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0990385a-b99f-41bd-8d17-8e7fb5ec4794'], standard_attr_id=2084, status=DOWN, tags=[], tenant_id=50b20ebe68c9494a933fabe997d62528, updated_at=2025-12-02T10:08:56Z on network 8a8e4389-c9b3-4713-b533-7861fccbcf32#033[00m Dec 2 05:08:56 localhost dnsmasq[320446]: exiting on receipt of SIGTERM Dec 2 05:08:56 localhost podman[320582]: 2025-12-02 10:08:56.765973007 +0000 UTC m=+0.061638534 container kill 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:08:56 localhost systemd[1]: libpod-6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e.scope: Deactivated successfully. 
Dec 2 05:08:56 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:56.780 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:56Z, description=, device_id=484ade60-328f-43eb-b990-42dac6f1b75b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=38c2fef2-e6bd-4f3b-a3ea-c9d9c60fb17c, ip_allocation=immediate, mac_address=fa:16:3e:72:43:2a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:08:40Z, description=, dns_domain=, id=bfdb46d8-0ab9-4f91-af70-05b63804efe6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1196347497, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54074, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2042, status=ACTIVE, subnets=['d57b0d74-058f-4387-bcee-307aa4948e69'], tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:08:47Z, vlan_transparent=None, network_id=bfdb46d8-0ab9-4f91-af70-05b63804efe6, port_security_enabled=False, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2085, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:08:56Z on network bfdb46d8-0ab9-4f91-af70-05b63804efe6#033[00m Dec 2 05:08:56 localhost nova_compute[281854]: 2025-12-02 10:08:56.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task 
ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:56 localhost nova_compute[281854]: 2025-12-02 10:08:56.829 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:08:56 localhost podman[320596]: 2025-12-02 10:08:56.845455985 +0000 UTC m=+0.060492533 container died 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:56 localhost systemd[1]: tmp-crun.4t63t4.mount: Deactivated successfully. Dec 2 05:08:56 localhost podman[320596]: 2025-12-02 10:08:56.889056547 +0000 UTC m=+0.104093055 container cleanup 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 05:08:56 localhost systemd[1]: libpod-conmon-6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e.scope: Deactivated successfully. 
Dec 2 05:08:56 localhost podman[320598]: 2025-12-02 10:08:56.986799012 +0000 UTC m=+0.193806926 container remove 6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:08:57 localhost dnsmasq[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/addn_hosts - 2 addresses Dec 2 05:08:57 localhost podman[320667]: 2025-12-02 10:08:57.029374816 +0000 UTC m=+0.058603932 container kill de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:08:57 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/host Dec 2 05:08:57 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/opts Dec 2 05:08:57 localhost dnsmasq[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/addn_hosts - 1 addresses Dec 2 05:08:57 localhost podman[320655]: 2025-12-02 10:08:57.083205781 +0000 UTC m=+0.154929950 container kill 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:08:57 localhost dnsmasq-dhcp[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/host Dec 2 05:08:57 localhost dnsmasq-dhcp[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/opts Dec 2 05:08:57 localhost neutron_sriov_agent[256494]: 2025-12-02 10:08:57.460 2 INFO neutron.agent.securitygroups_rpc [None req-675bc92b-ef71-4946-a4ea-2f67c0d27bea 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:08:57 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:57.652 263406 INFO neutron.agent.dhcp.agent [None req-ce6e6a38-608b-4fc3-b43c-4926734febd5 - - - - - -] DHCP configuration for ports {'38c2fef2-e6bd-4f3b-a3ea-c9d9c60fb17c', 'dba2acb3-305f-411c-a151-68276b1c53d9'} is completed#033[00m Dec 2 05:08:57 localhost systemd[1]: var-lib-containers-storage-overlay-cd3e505d8e880e23e61b81d3c4b057d6f4c504d64d83c246c85ab5ae5ccbaf0f-merged.mount: Deactivated successfully. Dec 2 05:08:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6504725b85f817469802e2a87a2f5729880545cf3bfb62f37f8f51021ef2f40e-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:08:57 localhost nova_compute[281854]: 2025-12-02 10:08:57.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:08:57 localhost nova_compute[281854]: 2025-12-02 10:08:57.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:08:57 localhost nova_compute[281854]: 2025-12-02 10:08:57.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:08:57 localhost podman[320748]: Dec 2 05:08:57 localhost podman[320748]: 2025-12-02 10:08:57.977959275 +0000 UTC m=+0.094510490 container create 2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:08:58 localhost systemd[1]: Started libpod-conmon-2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7.scope. 
Dec 2 05:08:58 localhost podman[320748]: 2025-12-02 10:08:57.933839669 +0000 UTC m=+0.050390934 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:08:58 localhost systemd[1]: Started libcrun container. Dec 2 05:08:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997b7cb411eca1e18b4e08673cc037792aaf71615365dfe2ab0daee54678e70c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:08:58 localhost podman[320748]: 2025-12-02 10:08:58.050396996 +0000 UTC m=+0.166948211 container init 2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:08:58 localhost podman[320748]: 2025-12-02 10:08:58.059015375 +0000 UTC m=+0.175566580 container start 2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:08:58 localhost dnsmasq[320766]: started, version 2.85 cachesize 150 Dec 2 05:08:58 localhost dnsmasq[320766]: DNS service limited to local subnets Dec 2 05:08:58 localhost 
dnsmasq[320766]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:08:58 localhost dnsmasq[320766]: warning: no upstream servers configured Dec 2 05:08:58 localhost dnsmasq-dhcp[320766]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:08:58 localhost dnsmasq[320766]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:08:58 localhost dnsmasq-dhcp[320766]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:08:58 localhost dnsmasq-dhcp[320766]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:08:58 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:58.301 263406 INFO neutron.agent.dhcp.agent [None req-9099030d-387a-4288-bfa2-f16a30907ab3 - - - - - -] DHCP configuration for ports {'4b41fcb7-4686-4ae0-bf20-f32d06645ac4', 'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:08:58 localhost nova_compute[281854]: 2025-12-02 10:08:58.316 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:08:58 localhost nova_compute[281854]: 2025-12-02 10:08:58.316 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:08:58 localhost nova_compute[281854]: 2025-12-02 10:08:58.317 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:08:58 localhost nova_compute[281854]: 2025-12-02 10:08:58.317 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:08:58 localhost dnsmasq[320766]: exiting on receipt of SIGTERM Dec 2 05:08:58 localhost podman[320784]: 2025-12-02 10:08:58.413492262 +0000 UTC m=+0.057745810 container kill 2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:08:58 localhost systemd[1]: libpod-2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7.scope: Deactivated successfully. 
Dec 2 05:08:58 localhost podman[320797]: 2025-12-02 10:08:58.488569452 +0000 UTC m=+0.060137783 container died 2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:08:58 localhost podman[320797]: 2025-12-02 10:08:58.512731667 +0000 UTC m=+0.084299978 container cleanup 2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:08:58 localhost systemd[1]: libpod-conmon-2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7.scope: Deactivated successfully. 
Dec 2 05:08:58 localhost podman[320798]: 2025-12-02 10:08:58.579691551 +0000 UTC m=+0.146727451 container remove 2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:08:58 localhost nova_compute[281854]: 2025-12-02 10:08:58.596 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:58 localhost ovn_controller[154505]: 2025-12-02T10:08:58Z|00342|binding|INFO|Releasing lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 from this chassis (sb_readonly=0) Dec 2 05:08:58 localhost kernel: device tap4b41fcb7-46 left promiscuous mode Dec 2 05:08:58 localhost ovn_controller[154505]: 2025-12-02T10:08:58Z|00343|binding|INFO|Setting lport 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 down in Southbound Dec 2 05:08:58 localhost nova_compute[281854]: 2025-12-02 10:08:58.619 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:08:58 localhost systemd[1]: var-lib-containers-storage-overlay-997b7cb411eca1e18b4e08673cc037792aaf71615365dfe2ab0daee54678e70c-merged.mount: Deactivated successfully. Dec 2 05:08:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2856229d50cdf17a43cf6e248d49c30517c3b01edef9d1acc6add5935fc7bbb7-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:08:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:58.856 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe45:c69d/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4b41fcb7-4686-4ae0-bf20-f32d06645ac4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:08:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:58.858 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 4b41fcb7-4686-4ae0-bf20-f32d06645ac4 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:08:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:58.860 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the 
namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:08:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:08:58.861 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[41e176c6-5b0d-4037-9ca1-8557c6f1e527]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:08:59 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:08:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:08:59.066 263406 INFO neutron.agent.dhcp.agent [None req-e1e01852-a348-4194-911d-9b46de5787ba - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:00 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:00.263 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:56Z, description=, device_id=484ade60-328f-43eb-b990-42dac6f1b75b, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=38c2fef2-e6bd-4f3b-a3ea-c9d9c60fb17c, ip_allocation=immediate, mac_address=fa:16:3e:72:43:2a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:08:40Z, description=, dns_domain=, id=bfdb46d8-0ab9-4f91-af70-05b63804efe6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1196347497, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54074, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2042, status=ACTIVE, 
subnets=['d57b0d74-058f-4387-bcee-307aa4948e69'], tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:08:47Z, vlan_transparent=None, network_id=bfdb46d8-0ab9-4f91-af70-05b63804efe6, port_security_enabled=False, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2085, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:08:56Z on network bfdb46d8-0ab9-4f91-af70-05b63804efe6#033[00m Dec 2 05:09:00 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:00.327 2 INFO neutron.agent.securitygroups_rpc [None req-87a4be36-2a80-4d5d-bf3f-f65722b03fc3 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:09:00 localhost dnsmasq[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/addn_hosts - 1 addresses Dec 2 05:09:00 localhost podman[320845]: 2025-12-02 10:09:00.491751294 +0000 UTC m=+0.068461205 container kill 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:00 localhost dnsmasq-dhcp[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/host Dec 2 05:09:00 localhost dnsmasq-dhcp[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/opts Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.592 
281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:09:00 localhost podman[320874]: 2025-12-02 10:09:00.640338974 +0000 UTC m=+0.062287981 container kill de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:09:00 localhost dnsmasq[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/addn_hosts - 1 addresses Dec 2 05:09:00 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/host Dec 2 05:09:00 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/opts Dec 2 05:09:00 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:00.770 2 INFO neutron.agent.securitygroups_rpc [None req-a8153026-65f9-484e-923d-c2362124502e 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.781 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.782 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.783 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.784 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.808 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.808 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.809 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.809 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.810 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:09:00 localhost 
neutron_dhcp_agent[263402]: 2025-12-02 10:09:00.825 263406 INFO neutron.agent.dhcp.agent [None req-53507337-aade-4a1a-b9fe-2f12810efa76 - - - - - -] DHCP configuration for ports {'38c2fef2-e6bd-4f3b-a3ea-c9d9c60fb17c'} is completed#033[00m Dec 2 05:09:00 localhost nova_compute[281854]: 2025-12-02 10:09:00.994 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.059 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:09:01 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2476910833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.337 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:09:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.418 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.418 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:09:01 localhost systemd[1]: tmp-crun.HsriqY.mount: Deactivated successfully. Dec 2 05:09:01 localhost podman[320923]: 2025-12-02 10:09:01.440777835 +0000 UTC m=+0.078496013 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd) Dec 2 05:09:01 localhost podman[320923]: 2025-12-02 10:09:01.457230073 +0000 UTC m=+0.094948241 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 05:09:01 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.654 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.658 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11236MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", 
"numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.658 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.659 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:09:01 localhost 
nova_compute[281854]: 2025-12-02 10:09:01.735 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.736 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.736 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:09:01 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:01.840 2 INFO neutron.agent.securitygroups_rpc [None req-a5be6bf0-ce48-4e65-99a4-3416f730b3a2 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.909 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.939 281858 DEBUG 
nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.939 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.955 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 05:09:01 localhost nova_compute[281854]: 2025-12-02 10:09:01.979 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing trait associations for resource provider 
c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 05:09:02 localhost nova_compute[281854]: 2025-12-02 10:09:02.036 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:09:02 localhost dnsmasq[319699]: read 
/var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/addn_hosts - 0 addresses Dec 2 05:09:02 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/host Dec 2 05:09:02 localhost dnsmasq-dhcp[319699]: read /var/lib/neutron/dhcp/8a8e4389-c9b3-4713-b533-7861fccbcf32/opts Dec 2 05:09:02 localhost podman[320970]: 2025-12-02 10:09:02.274078002 +0000 UTC m=+0.067592022 container kill de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:09:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:09:02 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2652316114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:09:02 localhost nova_compute[281854]: 2025-12-02 10:09:02.618 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:09:02 localhost nova_compute[281854]: 2025-12-02 10:09:02.627 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:09:02 localhost nova_compute[281854]: 2025-12-02 10:09:02.647 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:09:02 localhost nova_compute[281854]: 2025-12-02 10:09:02.650 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:09:02 localhost nova_compute[281854]: 2025-12-02 10:09:02.651 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:09:02 localhost dnsmasq[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/addn_hosts - 0 addresses Dec 2 05:09:02 localhost dnsmasq-dhcp[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/host Dec 2 05:09:02 localhost podman[321020]: 2025-12-02 10:09:02.760393761 +0000 UTC m=+0.048942884 container kill 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 2 05:09:02 localhost dnsmasq-dhcp[320209]: read /var/lib/neutron/dhcp/bfdb46d8-0ab9-4f91-af70-05b63804efe6/opts Dec 2 05:09:02 localhost ovn_controller[154505]: 2025-12-02T10:09:02Z|00344|binding|INFO|Releasing lport 8fbb99e9-2ad3-4260-a17b-f7524696dad5 from this chassis (sb_readonly=0) Dec 2 05:09:02 localhost kernel: device tap8fbb99e9-2a left promiscuous mode Dec 2 05:09:02 localhost ovn_controller[154505]: 2025-12-02T10:09:02Z|00345|binding|INFO|Setting lport 8fbb99e9-2ad3-4260-a17b-f7524696dad5 down in Southbound Dec 2 05:09:02 localhost nova_compute[281854]: 2025-12-02 10:09:02.951 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:02 localhost ovn_metadata_agent[160216]: 
2025-12-02 10:09:02.964 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-8a8e4389-c9b3-4713-b533-7861fccbcf32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a8e4389-c9b3-4713-b533-7861fccbcf32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50b20ebe68c9494a933fabe997d62528', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc4549c7-0142-4249-a0f1-78307f272ad4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8fbb99e9-2ad3-4260-a17b-f7524696dad5) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:02.965 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 8fbb99e9-2ad3-4260-a17b-f7524696dad5 in datapath 8a8e4389-c9b3-4713-b533-7861fccbcf32 unbound from our chassis#033[00m Dec 2 05:09:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:02.968 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a8e4389-c9b3-4713-b533-7861fccbcf32, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:02.969 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[01641f36-ca34-4786-b532-ceaedd228637]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:02 localhost nova_compute[281854]: 2025-12-02 10:09:02.979 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.052 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.053 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.054 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:09:03 localhost ovn_controller[154505]: 2025-12-02T10:09:03Z|00346|binding|INFO|Releasing lport 71712210-e77f-42b0-bbd5-d3267949cb4f from this chassis (sb_readonly=0) Dec 2 05:09:03 localhost kernel: device tap71712210-e7 left promiscuous mode Dec 2 05:09:03 localhost ovn_controller[154505]: 2025-12-02T10:09:03Z|00347|binding|INFO|Setting lport 71712210-e77f-42b0-bbd5-d3267949cb4f down in Southbound Dec 2 05:09:03 localhost 
nova_compute[281854]: 2025-12-02 10:09:03.254 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.262 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-bfdb46d8-0ab9-4f91-af70-05b63804efe6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfdb46d8-0ab9-4f91-af70-05b63804efe6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf7b847f-2e87-42d4-a87d-a72dff8a08d3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=71712210-e77f-42b0-bbd5-d3267949cb4f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.265 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 71712210-e77f-42b0-bbd5-d3267949cb4f in datapath bfdb46d8-0ab9-4f91-af70-05b63804efe6 unbound from our chassis#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 
10:09:03.268 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfdb46d8-0ab9-4f91-af70-05b63804efe6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:03 localhost nova_compute[281854]: 2025-12-02 10:09:03.270 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.269 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[389b220f-2e5f-4732-9dfa-a281254f5bd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:03 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:03.315 2 INFO neutron.agent.securitygroups_rpc [None req-11b93b80-ea7f-4df4-8312-05f04742e794 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.411 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 
'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.414 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.417 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:03.418 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[22f0c3b1-ece1-4a72-a75d-832d0b0d7c87]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:04 localhost openstack_network_exporter[242845]: ERROR 10:09:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:09:04 localhost openstack_network_exporter[242845]: ERROR 10:09:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:09:04 localhost openstack_network_exporter[242845]: ERROR 10:09:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:09:04 localhost openstack_network_exporter[242845]: ERROR 10:09:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:09:04 localhost openstack_network_exporter[242845]: Dec 2 05:09:04 localhost openstack_network_exporter[242845]: ERROR 10:09:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:09:04 localhost openstack_network_exporter[242845]: Dec 2 05:09:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:04.796 263406 INFO neutron.agent.linux.ip_lib [None req-f9d77031-8ceb-4333-912e-047cf12142b3 - - - - - -] Device tap0ada2217-2a cannot be used as it has no MAC address#033[00m Dec 2 05:09:04 localhost nova_compute[281854]: 2025-12-02 10:09:04.868 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:04 localhost kernel: device tap0ada2217-2a entered promiscuous mode Dec 2 05:09:04 localhost NetworkManager[5965]: [1764670144.8758] manager: (tap0ada2217-2a): new Generic device (/org/freedesktop/NetworkManager/Devices/56) Dec 2 05:09:04 localhost ovn_controller[154505]: 2025-12-02T10:09:04Z|00348|binding|INFO|Claiming lport 0ada2217-2a5d-48ec-b3e1-f4be95cae804 for this chassis. 
Dec 2 05:09:04 localhost ovn_controller[154505]: 2025-12-02T10:09:04Z|00349|binding|INFO|0ada2217-2a5d-48ec-b3e1-f4be95cae804: Claiming unknown Dec 2 05:09:04 localhost nova_compute[281854]: 2025-12-02 10:09:04.876 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:04 localhost systemd-udevd[321054]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:09:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:04.885 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8a:ca01/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0ada2217-2a5d-48ec-b3e1-f4be95cae804) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:04.887 160221 
INFO neutron.agent.ovn.metadata.agent [-] Port 0ada2217-2a5d-48ec-b3e1-f4be95cae804 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:04 localhost ovn_controller[154505]: 2025-12-02T10:09:04Z|00350|binding|INFO|Setting lport 0ada2217-2a5d-48ec-b3e1-f4be95cae804 ovn-installed in OVS Dec 2 05:09:04 localhost ovn_controller[154505]: 2025-12-02T10:09:04Z|00351|binding|INFO|Setting lport 0ada2217-2a5d-48ec-b3e1-f4be95cae804 up in Southbound Dec 2 05:09:04 localhost nova_compute[281854]: 2025-12-02 10:09:04.890 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:04.889 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7de86c17-dca9-4795-a188-896ecb54fd0e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:04.889 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:04 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:04.890 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[3374b84e-9e6c-4155-b6eb-2b066324d524]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:04 localhost nova_compute[281854]: 2025-12-02 10:09:04.893 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:04 localhost nova_compute[281854]: 2025-12-02 10:09:04.914 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:04 localhost nova_compute[281854]: 2025-12-02 10:09:04.951 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:04 localhost nova_compute[281854]: 2025-12-02 10:09:04.975 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:05 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:05.573 2 INFO neutron.agent.securitygroups_rpc [None req-d150629a-bcce-4a38-b00c-70964b564cd8 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:05 localhost podman[321109]: Dec 2 05:09:05 localhost podman[321109]: 2025-12-02 10:09:05.75590582 +0000 UTC m=+0.093010410 container create 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:09:05 localhost systemd[1]: Started libpod-conmon-4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1.scope. Dec 2 05:09:05 localhost podman[321109]: 2025-12-02 10:09:05.710432858 +0000 UTC m=+0.047537498 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:05 localhost systemd[1]: tmp-crun.ldCNSH.mount: Deactivated successfully. Dec 2 05:09:05 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33de825e2a2dd72ee39ef97e7e351a6db15d33a2072c70233b921fc2059cfb73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:05 localhost podman[321109]: 2025-12-02 10:09:05.839284221 +0000 UTC m=+0.176388811 container init 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:09:05 localhost podman[321109]: 2025-12-02 10:09:05.849405291 +0000 UTC m=+0.186509881 container start 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:09:05 localhost dnsmasq[321128]: started, version 2.85 cachesize 150 Dec 2 05:09:05 localhost dnsmasq[321128]: DNS service limited to local subnets Dec 2 05:09:05 localhost dnsmasq[321128]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:05 localhost dnsmasq[321128]: warning: no upstream servers configured Dec 
2 05:09:05 localhost dnsmasq[321128]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:05 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:05.917 263406 INFO neutron.agent.dhcp.agent [None req-f9d77031-8ceb-4333-912e-047cf12142b3 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:05Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=61687ac2-e5f9-4379-b916-cbc20f7dcee8, ip_allocation=immediate, mac_address=fa:16:3e:03:4b:72, name=tempest-NetworksTestDHCPv6-1045327991, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=38, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['0277418c-2cf9-4a26-87d2-695322913a68'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:01Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2116, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:05Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 
05:09:05 localhost nova_compute[281854]: 2025-12-02 10:09:05.997 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:06.021 263406 INFO neutron.agent.dhcp.agent [None req-bccb8f61-58ed-46ff-9d03-86ac2dc2c06c - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:09:06 localhost podman[240799]: time="2025-12-02T10:09:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:09:06 localhost nova_compute[281854]: 2025-12-02 10:09:06.060 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:06 localhost podman[240799]: @ - - [02/Dec/2025:10:09:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159650 "" "Go-http-client/1.1" Dec 2 05:09:06 localhost podman[240799]: @ - - [02/Dec/2025:10:09:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20179 "" "Go-http-client/1.1" Dec 2 05:09:06 localhost dnsmasq[321128]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:09:06 localhost podman[321148]: 2025-12-02 10:09:06.155875358 +0000 UTC m=+0.119681739 container kill 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125) Dec 2 05:09:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:06.375 263406 INFO neutron.agent.dhcp.agent [None req-78a43ff1-e309-4f20-af31-b4cb911dcd32 - - - - - -] DHCP configuration for ports {'61687ac2-e5f9-4379-b916-cbc20f7dcee8'} is completed#033[00m Dec 2 05:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:09:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:09:06 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:06.685 2 INFO neutron.agent.securitygroups_rpc [None req-eff2d2a9-9509-4ec3-933e-196163edb064 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:06 localhost podman[321168]: 2025-12-02 10:09:06.691430801 +0000 UTC m=+0.082530331 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:09:06 localhost nova_compute[281854]: 2025-12-02 10:09:06.696 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:06 localhost nova_compute[281854]: 2025-12-02 10:09:06.697 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:06 localhost nova_compute[281854]: 2025-12-02 10:09:06.697 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:06 localhost nova_compute[281854]: 2025-12-02 10:09:06.697 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:06 localhost podman[321168]: 2025-12-02 10:09:06.701327324 +0000 UTC m=+0.092426904 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:09:06 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:09:06 localhost systemd[1]: tmp-crun.Q645Sp.mount: Deactivated successfully. Dec 2 05:09:06 localhost podman[321169]: 2025-12-02 10:09:06.806810466 +0000 UTC m=+0.194889176 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 05:09:06 localhost podman[321169]: 2025-12-02 10:09:06.852236166 +0000 UTC m=+0.240314936 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:09:06 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:09:07 localhost dnsmasq[321128]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:07 localhost podman[321247]: 2025-12-02 10:09:07.017712146 +0000 UTC m=+0.072269778 container kill 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:07 localhost dnsmasq[319699]: exiting on receipt of SIGTERM Dec 2 05:09:07 localhost podman[321259]: 2025-12-02 10:09:07.062101209 +0000 UTC m=+0.067563282 container kill de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:09:07 localhost systemd[1]: libpod-de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a.scope: Deactivated successfully. 
Dec 2 05:09:07 localhost podman[321275]: 2025-12-02 10:09:07.143741164 +0000 UTC m=+0.064784147 container died de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:09:07 localhost podman[321275]: 2025-12-02 10:09:07.175083819 +0000 UTC m=+0.096126752 container cleanup de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:09:07 localhost systemd[1]: libpod-conmon-de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a.scope: Deactivated successfully. 
Dec 2 05:09:07 localhost podman[321278]: 2025-12-02 10:09:07.22387264 +0000 UTC m=+0.133844948 container remove de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a8e4389-c9b3-4713-b533-7861fccbcf32, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:09:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e154 e154: 6 total, 6 up, 6 in Dec 2 05:09:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:07.608 263406 INFO neutron.agent.dhcp.agent [None req-06f5a695-d526-4455-b095-d150da54f892 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:07.610 263406 INFO neutron.agent.dhcp.agent [None req-06f5a695-d526-4455-b095-d150da54f892 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:07 localhost systemd[1]: tmp-crun.fNHQi2.mount: Deactivated successfully. Dec 2 05:09:07 localhost systemd[1]: var-lib-containers-storage-overlay-331fe4c85053fe605b3d7d394a4a65cbed0aeb4f2e3994530c0c6c0a05693b91-merged.mount: Deactivated successfully. Dec 2 05:09:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de55849673dfacd1af528438a53ae08fd1579c7901e15ab043688d7244446c4a-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:07 localhost systemd[1]: run-netns-qdhcp\x2d8a8e4389\x2dc9b3\x2d4713\x2db533\x2d7861fccbcf32.mount: Deactivated successfully. Dec 2 05:09:07 localhost systemd[1]: tmp-crun.uEbLrP.mount: Deactivated successfully. 
Dec 2 05:09:07 localhost dnsmasq[321128]: exiting on receipt of SIGTERM Dec 2 05:09:07 localhost podman[321339]: 2025-12-02 10:09:07.942037918 +0000 UTC m=+0.081052501 container kill 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:09:07 localhost systemd[1]: libpod-4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1.scope: Deactivated successfully. Dec 2 05:09:07 localhost dnsmasq[320209]: exiting on receipt of SIGTERM Dec 2 05:09:07 localhost podman[321355]: 2025-12-02 10:09:07.988226348 +0000 UTC m=+0.080881935 container kill 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:07 localhost systemd[1]: libpod-5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4.scope: Deactivated successfully. 
Dec 2 05:09:08 localhost podman[321366]: 2025-12-02 10:09:08.003066094 +0000 UTC m=+0.052037418 container died 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:08 localhost podman[321366]: 2025-12-02 10:09:08.02880309 +0000 UTC m=+0.077774374 container cleanup 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:09:08 localhost systemd[1]: libpod-conmon-4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1.scope: Deactivated successfully. 
Dec 2 05:09:08 localhost podman[321394]: 2025-12-02 10:09:08.047764665 +0000 UTC m=+0.046580653 container died 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:09:08 localhost podman[321394]: 2025-12-02 10:09:08.078825123 +0000 UTC m=+0.077641060 container cleanup 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:09:08 localhost systemd[1]: libpod-conmon-5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4.scope: Deactivated successfully. 
Dec 2 05:09:08 localhost podman[321395]: 2025-12-02 10:09:08.118041848 +0000 UTC m=+0.106947591 container remove 5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bfdb46d8-0ab9-4f91-af70-05b63804efe6, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:08.143 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:08 localhost podman[321373]: 2025-12-02 10:09:08.168828732 +0000 UTC m=+0.208974171 container remove 4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:09:08 localhost kernel: device tap0ada2217-2a left promiscuous mode Dec 2 05:09:08 localhost ovn_controller[154505]: 2025-12-02T10:09:08Z|00352|binding|INFO|Releasing lport 0ada2217-2a5d-48ec-b3e1-f4be95cae804 from this chassis (sb_readonly=0) Dec 2 05:09:08 localhost ovn_controller[154505]: 2025-12-02T10:09:08Z|00353|binding|INFO|Setting lport 0ada2217-2a5d-48ec-b3e1-f4be95cae804 down in Southbound Dec 2 05:09:08 localhost nova_compute[281854]: 2025-12-02 10:09:08.179 
281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:08 localhost nova_compute[281854]: 2025-12-02 10:09:08.194 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:08.205 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8a:ca01/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0ada2217-2a5d-48ec-b3e1-f4be95cae804) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:08.208 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0ada2217-2a5d-48ec-b3e1-f4be95cae804 in datapath 
7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:09:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:08.210 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:08.211 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7fe0da-4da0-4540-be49-dda083b82171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:08.532 263406 INFO neutron.agent.dhcp.agent [None req-df363115-4f09-4027-892e-51432bc04e70 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:08.611 263406 INFO neutron.agent.dhcp.agent [None req-4ddd1669-849c-45dd-8c52-4ff6be9ca7d8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:08 localhost systemd[1]: tmp-crun.eTW9y4.mount: Deactivated successfully. Dec 2 05:09:08 localhost systemd[1]: var-lib-containers-storage-overlay-33de825e2a2dd72ee39ef97e7e351a6db15d33a2072c70233b921fc2059cfb73-merged.mount: Deactivated successfully. Dec 2 05:09:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e7fd5096d7458dce095a73f3287888eeeb8d0c8b269f446beec866463c3edc1-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:08 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:09:08 localhost systemd[1]: var-lib-containers-storage-overlay-155e384317babd96cf2cd1a89ffbaad899eea093c053e3a7d2f3cc36440b4ff7-merged.mount: Deactivated successfully. 
Dec 2 05:09:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5632184965fd7091ff7705ef40f369861a07fa387c60aba99123c60c1d5aa8e4-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:08 localhost systemd[1]: run-netns-qdhcp\x2dbfdb46d8\x2d0ab9\x2d4f91\x2daf70\x2d05b63804efe6.mount: Deactivated successfully. Dec 2 05:09:08 localhost ovn_controller[154505]: 2025-12-02T10:09:08Z|00354|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:09:08 localhost nova_compute[281854]: 2025-12-02 10:09:08.962 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:10 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:10.017 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:11 localhost nova_compute[281854]: 2025-12-02 10:09:10.999 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:11 localhost nova_compute[281854]: 2025-12-02 10:09:11.062 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:11 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:11.262 2 INFO neutron.agent.securitygroups_rpc [None req-d69c9fb8-aada-452c-807d-ffbf23ad4dde b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:09:11 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:11.292 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: 
clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:11 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:11.330 2 INFO neutron.agent.securitygroups_rpc [None req-158ba6e6-ae47-4633-afb7-8fe1fff090db 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:11 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:11.528 263406 INFO neutron.agent.linux.ip_lib [None req-9b4a9558-8b26-4cd9-bc17-e585f6ba3834 - - - - - -] Device tapb4439fe1-b6 cannot be used as it has no MAC address#033[00m Dec 2 05:09:11 localhost nova_compute[281854]: 2025-12-02 10:09:11.559 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:11 localhost kernel: device tapb4439fe1-b6 entered promiscuous mode Dec 2 05:09:11 localhost NetworkManager[5965]: [1764670151.5687] manager: (tapb4439fe1-b6): new Generic device (/org/freedesktop/NetworkManager/Devices/57) Dec 2 05:09:11 localhost ovn_controller[154505]: 2025-12-02T10:09:11Z|00355|binding|INFO|Claiming lport b4439fe1-b6e1-4982-a031-265c40bf42ca for this chassis. Dec 2 05:09:11 localhost nova_compute[281854]: 2025-12-02 10:09:11.569 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:11 localhost ovn_controller[154505]: 2025-12-02T10:09:11Z|00356|binding|INFO|b4439fe1-b6e1-4982-a031-265c40bf42ca: Claiming unknown Dec 2 05:09:11 localhost systemd-udevd[321439]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:09:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:11.583 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb8:8fc3/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b4439fe1-b6e1-4982-a031-265c40bf42ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:11.586 160221 INFO neutron.agent.ovn.metadata.agent [-] Port b4439fe1-b6e1-4982-a031-265c40bf42ca in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:11.589 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port c0c7af0d-4fcb-4556-861c-4af2f37ea5e4 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:11.590 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:11.592 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[82949cf7-1133-4bfd-b12c-4572a073dd90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:11 localhost journal[230136]: ethtool ioctl error on tapb4439fe1-b6: No such device Dec 2 05:09:11 localhost nova_compute[281854]: 2025-12-02 10:09:11.603 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:11 localhost journal[230136]: ethtool ioctl error on tapb4439fe1-b6: No such device Dec 2 05:09:11 localhost ovn_controller[154505]: 2025-12-02T10:09:11Z|00357|binding|INFO|Setting lport b4439fe1-b6e1-4982-a031-265c40bf42ca ovn-installed in OVS Dec 2 05:09:11 localhost ovn_controller[154505]: 2025-12-02T10:09:11Z|00358|binding|INFO|Setting lport b4439fe1-b6e1-4982-a031-265c40bf42ca up in Southbound Dec 2 05:09:11 localhost nova_compute[281854]: 2025-12-02 10:09:11.611 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:11 localhost journal[230136]: ethtool ioctl error on tapb4439fe1-b6: No such device Dec 2 05:09:11 localhost journal[230136]: ethtool ioctl error on tapb4439fe1-b6: No such device Dec 2 05:09:11 localhost journal[230136]: ethtool ioctl error on tapb4439fe1-b6: No such device Dec 2 05:09:11 localhost journal[230136]: ethtool ioctl error on tapb4439fe1-b6: No such device 
Dec 2 05:09:11 localhost journal[230136]: ethtool ioctl error on tapb4439fe1-b6: No such device Dec 2 05:09:11 localhost journal[230136]: ethtool ioctl error on tapb4439fe1-b6: No such device Dec 2 05:09:11 localhost nova_compute[281854]: 2025-12-02 10:09:11.642 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:11 localhost nova_compute[281854]: 2025-12-02 10:09:11.667 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:12 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:12.331 2 INFO neutron.agent.securitygroups_rpc [None req-9fd8a609-568a-4b57-8025-f518255ff815 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']#033[00m Dec 2 05:09:12 localhost podman[321510]: Dec 2 05:09:12 localhost podman[321510]: 2025-12-02 10:09:12.43238391 +0000 UTC m=+0.087847482 container create 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:09:12 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:12.452 2 INFO neutron.agent.securitygroups_rpc [None req-4d3ff4f1-7788-4535-9205-e4647a2c3ad1 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:12 
localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:12.457 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:12 localhost systemd[1]: Started libpod-conmon-39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63.scope. Dec 2 05:09:12 localhost podman[321510]: 2025-12-02 10:09:12.389733844 +0000 UTC m=+0.045197466 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:12 localhost systemd[1]: Started libcrun container. Dec 2 05:09:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/566de893ebe6e92e509fb823e244732192be440f9e7f7b105f2bd6bfda91f749/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:12 localhost podman[321510]: 2025-12-02 10:09:12.508790726 +0000 UTC m=+0.164254268 container init 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:12 localhost podman[321510]: 2025-12-02 10:09:12.520297133 +0000 UTC m=+0.175760695 container start 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:09:12 localhost dnsmasq[321528]: started, version 2.85 cachesize 150 Dec 2 05:09:12 localhost dnsmasq[321528]: DNS service limited to local subnets Dec 2 05:09:12 localhost dnsmasq[321528]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:12 localhost dnsmasq[321528]: warning: no upstream servers configured Dec 2 05:09:12 localhost dnsmasq-dhcp[321528]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:09:12 localhost dnsmasq[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:12 localhost dnsmasq-dhcp[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:12 localhost dnsmasq-dhcp[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:12 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:12.587 263406 INFO neutron.agent.dhcp.agent [None req-9b4a9558-8b26-4cd9-bc17-e585f6ba3834 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:10Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=76e0cc3c-a1f7-44b3-8218-4e157a8dda23, ip_allocation=immediate, mac_address=fa:16:3e:3c:a2:56, name=tempest-NetworksTestDHCPv6-314118676, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, 
port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=40, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['3bc4be47-66b2-4149-ab7b-9ea605321c8c'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:08Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2134, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:11Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:09:12 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:12.657 263406 INFO neutron.agent.dhcp.agent [None req-37f788d9-fbc4-4b54-9317-d5e82251a062 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:09:12 localhost dnsmasq[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:09:12 localhost dnsmasq-dhcp[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:12 localhost dnsmasq-dhcp[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:12 localhost podman[321545]: 2025-12-02 10:09:12.785444209 +0000 UTC m=+0.064437238 container kill 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:09:12 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:12.977 263406 INFO neutron.agent.dhcp.agent [None req-2b7b6b4e-b5a9-4583-9e28-2e4db20559bd - - - - - -] DHCP configuration for ports {'76e0cc3c-a1f7-44b3-8218-4e157a8dda23'} is completed#033[00m Dec 2 05:09:13 localhost dnsmasq[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:13 localhost dnsmasq-dhcp[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:13 localhost dnsmasq-dhcp[321528]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:13 localhost podman[321584]: 2025-12-02 10:09:13.109279048 +0000 UTC m=+0.053219719 container kill 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:14.134 263406 INFO neutron.agent.linux.ip_lib [None req-d77cc7dd-36e8-4708-a6e9-086c5a8a2bde - - - - - -] Device tapcbda893e-6a cannot be used as it has no MAC address#033[00m Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.164 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost kernel: device tapcbda893e-6a entered 
promiscuous mode Dec 2 05:09:14 localhost NetworkManager[5965]: [1764670154.1707] manager: (tapcbda893e-6a): new Generic device (/org/freedesktop/NetworkManager/Devices/58) Dec 2 05:09:14 localhost ovn_controller[154505]: 2025-12-02T10:09:14Z|00359|binding|INFO|Claiming lport cbda893e-6a95-4f21-b53a-4734c24663e0 for this chassis. Dec 2 05:09:14 localhost ovn_controller[154505]: 2025-12-02T10:09:14Z|00360|binding|INFO|cbda893e-6a95-4f21-b53a-4734c24663e0: Claiming unknown Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.171 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:14.182 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-e13ed0b0-82be-499b-b8af-a15d85a02df9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e13ed0b0-82be-499b-b8af-a15d85a02df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e53e7092-6e4c-49fc-9858-fc71f27a93fb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=cbda893e-6a95-4f21-b53a-4734c24663e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:14 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:14.184 160221 INFO neutron.agent.ovn.metadata.agent [-] Port cbda893e-6a95-4f21-b53a-4734c24663e0 in datapath e13ed0b0-82be-499b-b8af-a15d85a02df9 bound to our chassis#033[00m Dec 2 05:09:14 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:14.185 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e13ed0b0-82be-499b-b8af-a15d85a02df9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.190 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:14.191 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[088c9c1c-3bd8-4158-bd45-e37ab15bfee2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:14 localhost ovn_controller[154505]: 2025-12-02T10:09:14Z|00361|binding|INFO|Setting lport cbda893e-6a95-4f21-b53a-4734c24663e0 ovn-installed in OVS Dec 2 05:09:14 localhost ovn_controller[154505]: 2025-12-02T10:09:14Z|00362|binding|INFO|Setting lport cbda893e-6a95-4f21-b53a-4734c24663e0 up in Southbound Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.194 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.208 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost systemd[1]: tmp-crun.zHZ4qY.mount: Deactivated successfully. Dec 2 05:09:14 localhost podman[321628]: 2025-12-02 10:09:14.224341605 +0000 UTC m=+0.080469246 container kill 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:09:14 localhost dnsmasq[321528]: exiting on receipt of SIGTERM Dec 2 05:09:14 localhost systemd[1]: libpod-39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63.scope: Deactivated successfully. 
Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.249 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.272 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost podman[321647]: 2025-12-02 10:09:14.28311604 +0000 UTC m=+0.049213032 container died 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:09:14 localhost podman[321647]: 2025-12-02 10:09:14.307212923 +0000 UTC m=+0.073309895 container cleanup 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:09:14 localhost systemd[1]: libpod-conmon-39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63.scope: Deactivated successfully. 
Dec 2 05:09:14 localhost podman[321655]: 2025-12-02 10:09:14.382028807 +0000 UTC m=+0.128062924 container remove 39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:09:14 localhost ovn_controller[154505]: 2025-12-02T10:09:14Z|00363|binding|INFO|Releasing lport b4439fe1-b6e1-4982-a031-265c40bf42ca from this chassis (sb_readonly=0) Dec 2 05:09:14 localhost ovn_controller[154505]: 2025-12-02T10:09:14Z|00364|binding|INFO|Setting lport b4439fe1-b6e1-4982-a031-265c40bf42ca down in Southbound Dec 2 05:09:14 localhost kernel: device tapb4439fe1-b6 left promiscuous mode Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.396 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:14.416 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb8:8fc3/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b4439fe1-b6e1-4982-a031-265c40bf42ca) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:14 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:14.418 160221 INFO neutron.agent.ovn.metadata.agent [-] Port b4439fe1-b6e1-4982-a031-265c40bf42ca in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:09:14 localhost nova_compute[281854]: 2025-12-02 10:09:14.420 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:14 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:14.421 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:14 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:14.422 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb3318b-366a-4096-bffe-caa4cdb772d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:14 localhost systemd[1]: 
var-lib-containers-storage-overlay-566de893ebe6e92e509fb823e244732192be440f9e7f7b105f2bd6bfda91f749-merged.mount: Deactivated successfully. Dec 2 05:09:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39cf601d7c30ccaa89aa40f12098b88e5a3cf6674de0305e23e1be3bfba25c63-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:14 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:09:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:14.865 263406 INFO neutron.agent.dhcp.agent [None req-ef039e85-1a3a-4527-a735-198fd421120b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:15 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:15.186 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:15 localhost podman[321726]: Dec 2 05:09:15 localhost podman[321726]: 2025-12-02 10:09:15.290086675 +0000 UTC m=+0.101170217 container create c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:15 localhost systemd[1]: Started libpod-conmon-c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2.scope. Dec 2 05:09:15 localhost podman[321726]: 2025-12-02 10:09:15.237291468 +0000 UTC m=+0.048375020 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:15 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061e35e8e2271744f22f5b8b10d46d8b4670aa5cc18d5b13debaf7e3b4be708c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:15 localhost podman[321726]: 2025-12-02 10:09:15.364173619 +0000 UTC m=+0.175257161 container init c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:15 localhost podman[321726]: 2025-12-02 10:09:15.373494008 +0000 UTC m=+0.184577550 container start c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:15 localhost dnsmasq[321744]: started, version 2.85 cachesize 150 Dec 2 05:09:15 localhost dnsmasq[321744]: DNS service limited to local subnets Dec 2 05:09:15 localhost dnsmasq[321744]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:15 localhost dnsmasq[321744]: warning: no upstream servers configured Dec 
2 05:09:15 localhost dnsmasq-dhcp[321744]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:09:15 localhost dnsmasq[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/addn_hosts - 0 addresses Dec 2 05:09:15 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/host Dec 2 05:09:15 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/opts Dec 2 05:09:15 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:15.568 263406 INFO neutron.agent.dhcp.agent [None req-c0a58fc7-ea7b-4d18-97de-d664fbcf8601 - - - - - -] DHCP configuration for ports {'5ef446fe-4b56-442f-96a4-11d2e9927b0a'} is completed#033[00m Dec 2 05:09:15 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:15.918 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:16 localhost nova_compute[281854]: 2025-12-02 10:09:16.002 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:16 localhost nova_compute[281854]: 2025-12-02 10:09:16.064 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 e155: 6 total, 6 up, 6 in Dec 2 05:09:18 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:18.014 2 INFO neutron.agent.securitygroups_rpc [None req-1b7dd085-a5c1-4a81-bd02-4cabc7845a6f f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:18 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:09:18 localhost podman[321745]: 2025-12-02 10:09:18.452697017 +0000 UTC m=+0.089940968 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Dec 2 05:09:18 localhost podman[321745]: 
2025-12-02 10:09:18.494395108 +0000 UTC m=+0.131639069 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:09:18 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:09:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:09:19 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2213891855' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:09:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:09:19 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2213891855' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:09:19 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:19.424 2 INFO neutron.agent.securitygroups_rpc [None req-278e88f3-562a-4f7b-8f10-c3e2bfd4ee2e 8b49e5c866794aad866d55bb5f154d67 7dffef2e74844a7ebb6ee68826fb7e57 - - default default] Security group member updated ['32471057-4d02-424a-9e3e-19629ab1677d']#033[00m Dec 2 05:09:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:19.478 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:19Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ff9379c5-c4de-4f4c-9009-a3c2753f59eb, ip_allocation=immediate, mac_address=fa:16:3e:85:d4:94, name=tempest-RoutersTest-290673381, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:11Z, description=, dns_domain=, id=e13ed0b0-82be-499b-b8af-a15d85a02df9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-286923336, port_security_enabled=True, 
project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=11794, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2135, status=ACTIVE, subnets=['b8a39615-c67f-42f5-884d-1d6d18d7847a'], tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:12Z, vlan_transparent=None, network_id=e13ed0b0-82be-499b-b8af-a15d85a02df9, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['32471057-4d02-424a-9e3e-19629ab1677d'], standard_attr_id=2179, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:19Z on network e13ed0b0-82be-499b-b8af-a15d85a02df9#033[00m Dec 2 05:09:19 localhost dnsmasq[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/addn_hosts - 1 addresses Dec 2 05:09:19 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/host Dec 2 05:09:19 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/opts Dec 2 05:09:19 localhost podman[321782]: 2025-12-02 10:09:19.720712209 +0000 UTC m=+0.055678325 container kill c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:19 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:19.740 2 INFO neutron.agent.securitygroups_rpc 
[None req-e9fc3440-8683-40fd-946b-446e84f960a4 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:19.770 263406 INFO neutron.agent.linux.ip_lib [None req-cf694e92-ab83-4ed8-bd35-b32a57da5eb8 - - - - - -] Device tap324c0357-f6 cannot be used as it has no MAC address#033[00m Dec 2 05:09:19 localhost nova_compute[281854]: 2025-12-02 10:09:19.804 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:19 localhost kernel: device tap324c0357-f6 entered promiscuous mode Dec 2 05:09:19 localhost NetworkManager[5965]: [1764670159.8119] manager: (tap324c0357-f6): new Generic device (/org/freedesktop/NetworkManager/Devices/59) Dec 2 05:09:19 localhost ovn_controller[154505]: 2025-12-02T10:09:19Z|00365|binding|INFO|Claiming lport 324c0357-f680-4288-a358-1a1dfc9002b3 for this chassis. Dec 2 05:09:19 localhost ovn_controller[154505]: 2025-12-02T10:09:19Z|00366|binding|INFO|324c0357-f680-4288-a358-1a1dfc9002b3: Claiming unknown Dec 2 05:09:19 localhost systemd-udevd[321807]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:09:19 localhost nova_compute[281854]: 2025-12-02 10:09:19.812 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:19 localhost ovn_controller[154505]: 2025-12-02T10:09:19Z|00367|binding|INFO|Setting lport 324c0357-f680-4288-a358-1a1dfc9002b3 up in Southbound Dec 2 05:09:19 localhost ovn_controller[154505]: 2025-12-02T10:09:19Z|00368|binding|INFO|Setting lport 324c0357-f680-4288-a358-1a1dfc9002b3 ovn-installed in OVS Dec 2 05:09:19 localhost nova_compute[281854]: 2025-12-02 10:09:19.822 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:19 localhost nova_compute[281854]: 2025-12-02 10:09:19.825 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:19.821 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe70:68f5/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=324c0357-f680-4288-a358-1a1dfc9002b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:19.823 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 324c0357-f680-4288-a358-1a1dfc9002b3 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:19.826 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port ae9d2580-403f-4ce2-a075-d2b7d708275b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:19.826 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:19.827 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[617969b8-b118-4ec5-b423-0ef5a6f9dcb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:19 localhost journal[230136]: ethtool ioctl error on tap324c0357-f6: No such device Dec 2 05:09:19 localhost nova_compute[281854]: 2025-12-02 10:09:19.848 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:19 localhost journal[230136]: ethtool ioctl error on 
tap324c0357-f6: No such device Dec 2 05:09:19 localhost journal[230136]: ethtool ioctl error on tap324c0357-f6: No such device Dec 2 05:09:19 localhost journal[230136]: ethtool ioctl error on tap324c0357-f6: No such device Dec 2 05:09:19 localhost journal[230136]: ethtool ioctl error on tap324c0357-f6: No such device Dec 2 05:09:19 localhost journal[230136]: ethtool ioctl error on tap324c0357-f6: No such device Dec 2 05:09:19 localhost journal[230136]: ethtool ioctl error on tap324c0357-f6: No such device Dec 2 05:09:19 localhost journal[230136]: ethtool ioctl error on tap324c0357-f6: No such device Dec 2 05:09:19 localhost nova_compute[281854]: 2025-12-02 10:09:19.908 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:19 localhost nova_compute[281854]: 2025-12-02 10:09:19.936 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:20 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:20.044 263406 INFO neutron.agent.dhcp.agent [None req-e85393e9-c731-4b60-8094-ea395ce72bb0 - - - - - -] DHCP configuration for ports {'ff9379c5-c4de-4f4c-9009-a3c2753f59eb'} is completed#033[00m Dec 2 05:09:20 localhost podman[321883]: Dec 2 05:09:20 localhost podman[321883]: 2025-12-02 10:09:20.76656963 +0000 UTC m=+0.081732929 container create 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 
05:09:20 localhost systemd[1]: Started libpod-conmon-367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d.scope. Dec 2 05:09:20 localhost podman[321883]: 2025-12-02 10:09:20.724692744 +0000 UTC m=+0.039856083 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:20 localhost systemd[1]: tmp-crun.e2I5B1.mount: Deactivated successfully. Dec 2 05:09:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:09:20 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1785879792' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:09:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:09:20 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1785879792' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:09:20 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283af53ad12a8290739b2dcc7eba4c6bcf76cb8c5e9c11277a0dcadc0c44c56f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:20 localhost podman[321883]: 2025-12-02 10:09:20.865118246 +0000 UTC m=+0.180281525 container init 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:09:20 localhost podman[321883]: 2025-12-02 10:09:20.871727882 +0000 UTC m=+0.186891161 container start 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:20 localhost dnsmasq[321902]: started, version 2.85 cachesize 150 Dec 2 05:09:20 localhost dnsmasq[321902]: DNS service limited to local subnets Dec 2 05:09:20 localhost dnsmasq[321902]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:20 localhost dnsmasq[321902]: warning: no upstream servers configured Dec 
2 05:09:20 localhost dnsmasq[321902]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:20 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:20.938 263406 INFO neutron.agent.dhcp.agent [None req-cf694e92-ab83-4ed8-bd35-b32a57da5eb8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:19Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0e2b3ab0-808b-4d9a-adbe-b5e67180b17d, ip_allocation=immediate, mac_address=fa:16:3e:68:9f:3f, name=tempest-NetworksTestDHCPv6-645243242, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=42, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['19bc6d19-aeb3-4f9d-8660-0603a2e336bc'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:14Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2180, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:19Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 
05:09:21 localhost nova_compute[281854]: 2025-12-02 10:09:21.003 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:21.010 263406 INFO neutron.agent.dhcp.agent [None req-9d90c008-f4df-47a7-9d2e-c8ec48cc7e38 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:09:21 localhost nova_compute[281854]: 2025-12-02 10:09:21.066 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:21 localhost dnsmasq[321902]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:09:21 localhost podman[321921]: 2025-12-02 10:09:21.140405952 +0000 UTC m=+0.058806258 container kill 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:09:21 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:21.178 2 INFO neutron.agent.securitygroups_rpc [None req-8371118d-5c83-45c5-bfa7-f542b4f1df3f 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:21 
localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:21.329 263406 INFO neutron.agent.dhcp.agent [None req-3f718340-16a8-4d1c-a551-b7c320347f95 - - - - - -] DHCP configuration for ports {'0e2b3ab0-808b-4d9a-adbe-b5e67180b17d'} is completed#033[00m Dec 2 05:09:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:09:21 localhost dnsmasq[321902]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:21 localhost podman[321968]: 2025-12-02 10:09:21.468835155 +0000 UTC m=+0.062640101 container kill 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:09:21 localhost podman[321956]: 2025-12-02 10:09:21.456452375 +0000 UTC m=+0.086375303 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:21 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:21.511 2 INFO neutron.agent.securitygroups_rpc [None req-10f867de-2584-4ed7-a0e8-fb9276ac33a8 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:21 localhost podman[321956]: 2025-12-02 10:09:21.540284089 +0000 UTC m=+0.170207047 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:09:21 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:09:21 localhost dnsmasq[321902]: exiting on receipt of SIGTERM Dec 2 05:09:21 localhost podman[322014]: 2025-12-02 10:09:21.900902469 +0000 UTC m=+0.071774664 container kill 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:21 localhost systemd[1]: tmp-crun.rmCV2Y.mount: Deactivated successfully. Dec 2 05:09:21 localhost systemd[1]: libpod-367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d.scope: Deactivated successfully. Dec 2 05:09:21 localhost podman[322028]: 2025-12-02 10:09:21.989825089 +0000 UTC m=+0.075085712 container died 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:22 localhost podman[322028]: 2025-12-02 10:09:22.026484796 +0000 UTC m=+0.111745399 container cleanup 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 2 05:09:22 localhost systemd[1]: libpod-conmon-367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d.scope: Deactivated successfully. Dec 2 05:09:22 localhost podman[322035]: 2025-12-02 10:09:22.070638862 +0000 UTC m=+0.136849517 container remove 367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:22 localhost ovn_controller[154505]: 2025-12-02T10:09:22Z|00369|binding|INFO|Releasing lport 324c0357-f680-4288-a358-1a1dfc9002b3 from this chassis (sb_readonly=0) Dec 2 05:09:22 localhost ovn_controller[154505]: 2025-12-02T10:09:22Z|00370|binding|INFO|Setting lport 324c0357-f680-4288-a358-1a1dfc9002b3 down in Southbound Dec 2 05:09:22 localhost kernel: device tap324c0357-f6 left promiscuous mode Dec 2 05:09:22 localhost nova_compute[281854]: 2025-12-02 10:09:22.084 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:22 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:22.095 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to 
row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe70:68f5/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=324c0357-f680-4288-a358-1a1dfc9002b3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:22 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:22.097 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 324c0357-f680-4288-a358-1a1dfc9002b3 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:09:22 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:22.099 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:22 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:22.100 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[84026192-a26d-457f-a86e-767350efa6b4]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:22 localhost nova_compute[281854]: 2025-12-02 10:09:22.104 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:22.358 263406 INFO neutron.agent.dhcp.agent [None req-96c827b3-f68e-4cf8-b99a-86434dedc63c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:22 localhost systemd[1]: var-lib-containers-storage-overlay-283af53ad12a8290739b2dcc7eba4c6bcf76cb8c5e9c11277a0dcadc0c44c56f-merged.mount: Deactivated successfully. Dec 2 05:09:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-367e3bd9c0d8614352308ed6c9b0cbb4f6e0d65f91c4d6f155eee6b363e8052d-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:22 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:09:23 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:23.060 2 INFO neutron.agent.securitygroups_rpc [None req-7bc34f63-f96a-4396-b70c-07601d07dee2 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:09:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:09:23 localhost podman[322058]: 2025-12-02 10:09:23.44997624 +0000 UTC m=+0.088971683 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:09:23 localhost podman[322058]: 2025-12-02 10:09:23.456858913 +0000 UTC m=+0.095854366 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:09:23 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:09:23 localhost podman[322057]: 2025-12-02 10:09:23.498273827 +0000 UTC m=+0.136748476 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 2 05:09:23 localhost podman[322057]: 2025-12-02 10:09:23.511870549 +0000 UTC m=+0.150345248 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, release=1755695350, managed_by=edpm_ansible) Dec 2 05:09:23 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:09:23 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:23.752 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:23 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:23.773 2 INFO neutron.agent.securitygroups_rpc [None req-ef3a9568-c379-4b2a-a06d-b347ad68d0c7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:23 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:23.834 2 INFO neutron.agent.securitygroups_rpc [None req-f87c0fb9-70c4-4316-8fe5-2d1d482ef952 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:23 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:23.896 263406 INFO neutron.agent.linux.ip_lib [None req-3c460768-e00a-4238-817c-5ffda8c4d9c8 - - - - - -] Device tap317acccc-f5 cannot be used as it has no MAC address#033[00m Dec 2 05:09:23 localhost nova_compute[281854]: 2025-12-02 10:09:23.956 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:23 localhost kernel: device tap317acccc-f5 entered promiscuous mode Dec 2 05:09:23 localhost NetworkManager[5965]: [1764670163.9624] manager: (tap317acccc-f5): new Generic device (/org/freedesktop/NetworkManager/Devices/60) Dec 2 05:09:23 localhost nova_compute[281854]: 2025-12-02 10:09:23.962 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:23 localhost ovn_controller[154505]: 2025-12-02T10:09:23Z|00371|binding|INFO|Claiming lport 317acccc-f5e4-452d-a107-edeeec553e45 for this chassis. 
Dec 2 05:09:23 localhost ovn_controller[154505]: 2025-12-02T10:09:23Z|00372|binding|INFO|317acccc-f5e4-452d-a107-edeeec553e45: Claiming unknown Dec 2 05:09:23 localhost systemd-udevd[322108]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:09:23 localhost ovn_controller[154505]: 2025-12-02T10:09:23Z|00373|binding|INFO|Setting lport 317acccc-f5e4-452d-a107-edeeec553e45 ovn-installed in OVS Dec 2 05:09:23 localhost nova_compute[281854]: 2025-12-02 10:09:23.976 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:23 localhost ovn_controller[154505]: 2025-12-02T10:09:23Z|00374|binding|INFO|Setting lport 317acccc-f5e4-452d-a107-edeeec553e45 up in Southbound Dec 2 05:09:23 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:23.986 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2d:2b7a/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, 
chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=317acccc-f5e4-452d-a107-edeeec553e45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:23 localhost journal[230136]: ethtool ioctl error on tap317acccc-f5: No such device Dec 2 05:09:23 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:23.989 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 317acccc-f5e4-452d-a107-edeeec553e45 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:23 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:23.991 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 036480a6-e951-4d35-a3ac-a50bf3818be6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:23 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:23.991 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:23 localhost journal[230136]: ethtool ioctl error on tap317acccc-f5: No such device Dec 2 05:09:23 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:23.993 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[1a422661-540f-4968-85ca-653ae219f99e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:23 localhost journal[230136]: ethtool ioctl error on tap317acccc-f5: No such device Dec 2 05:09:24 localhost nova_compute[281854]: 2025-12-02 10:09:23.998 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:24 localhost journal[230136]: ethtool ioctl error on 
tap317acccc-f5: No such device Dec 2 05:09:24 localhost journal[230136]: ethtool ioctl error on tap317acccc-f5: No such device Dec 2 05:09:24 localhost journal[230136]: ethtool ioctl error on tap317acccc-f5: No such device Dec 2 05:09:24 localhost journal[230136]: ethtool ioctl error on tap317acccc-f5: No such device Dec 2 05:09:24 localhost journal[230136]: ethtool ioctl error on tap317acccc-f5: No such device Dec 2 05:09:24 localhost nova_compute[281854]: 2025-12-02 10:09:24.042 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:24 localhost nova_compute[281854]: 2025-12-02 10:09:24.070 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:24 localhost podman[322215]: Dec 2 05:09:24 localhost podman[322215]: 2025-12-02 10:09:24.882234177 +0000 UTC m=+0.099338617 container create a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:09:24 localhost systemd[1]: Started libpod-conmon-a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4.scope. Dec 2 05:09:24 localhost podman[322215]: 2025-12-02 10:09:24.834589218 +0000 UTC m=+0.051693698 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:24 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a3660f35d8be91ceba0dc28d4805873619e4df442f680ea55f00f5be81d6df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:24 localhost podman[322215]: 2025-12-02 10:09:24.946965412 +0000 UTC m=+0.164069822 container init a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:09:24 localhost podman[322215]: 2025-12-02 10:09:24.954218356 +0000 UTC m=+0.171322796 container start a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:24 localhost dnsmasq[322247]: started, version 2.85 cachesize 150 Dec 2 05:09:24 localhost dnsmasq[322247]: DNS service limited to local subnets Dec 2 05:09:24 localhost dnsmasq[322247]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:24 localhost dnsmasq[322247]: warning: no upstream servers configured Dec 
2 05:09:24 localhost dnsmasq-dhcp[322247]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:09:24 localhost dnsmasq[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:24 localhost dnsmasq-dhcp[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:24 localhost dnsmasq-dhcp[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:24 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:24.984 2 INFO neutron.agent.securitygroups_rpc [None req-b3aa5b43-46a5-4652-aa07-2f62355aecf1 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:25 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:25.013 263406 INFO neutron.agent.dhcp.agent [None req-3c460768-e00a-4238-817c-5ffda8c4d9c8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:23Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f6ac3ef5-c653-4eda-b061-848a27409fb0, ip_allocation=immediate, mac_address=fa:16:3e:b8:be:3d, name=tempest-NetworksTestDHCPv6-1258162334, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=44, router:external=False, 
shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['bec535fc-c757-4086-bb74-960704a071e4'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:21Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2201, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:23Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:09:25 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:25.096 263406 INFO neutron.agent.dhcp.agent [None req-8e1a68e1-ee7f-4c8e-9636-da2fc072c6b2 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:09:25 localhost dnsmasq[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:09:25 localhost dnsmasq-dhcp[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:25 localhost dnsmasq-dhcp[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:25 localhost podman[322272]: 2025-12-02 10:09:25.204728631 +0000 UTC m=+0.067025007 container kill a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:25 localhost neutron_dhcp_agent[263402]: 
2025-12-02 10:09:25.389 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:19Z, description=, device_id=1e3f609d-8d11-4c97-acad-11c6a919197a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ff9379c5-c4de-4f4c-9009-a3c2753f59eb, ip_allocation=immediate, mac_address=fa:16:3e:85:d4:94, name=tempest-RoutersTest-290673381, network_id=e13ed0b0-82be-499b-b8af-a15d85a02df9, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['32471057-4d02-424a-9e3e-19629ab1677d'], standard_attr_id=2179, status=ACTIVE, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:22Z on network e13ed0b0-82be-499b-b8af-a15d85a02df9#033[00m Dec 2 05:09:25 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:09:25 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:09:25 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:25.432 263406 INFO neutron.agent.dhcp.agent [None req-7309156d-d815-42c1-9594-612866a2ca84 - - - - - -] DHCP configuration for ports {'f6ac3ef5-c653-4eda-b061-848a27409fb0'} is completed#033[00m Dec 2 05:09:25 localhost dnsmasq[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:25 localhost dnsmasq-dhcp[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:25 localhost dnsmasq-dhcp[322247]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:25 localhost podman[322331]: 
2025-12-02 10:09:25.556496666 +0000 UTC m=+0.090821312 container kill a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:09:25 localhost dnsmasq[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/addn_hosts - 1 addresses Dec 2 05:09:25 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/host Dec 2 05:09:25 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/opts Dec 2 05:09:25 localhost podman[322375]: 2025-12-02 10:09:25.6962397 +0000 UTC m=+0.063234486 container kill c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:25 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:25.979 263406 INFO neutron.agent.dhcp.agent [None req-b34c7367-ce76-40d6-9111-bfa3b717f280 - - - - - -] DHCP configuration for ports {'ff9379c5-c4de-4f4c-9009-a3c2753f59eb'} is completed#033[00m Dec 2 05:09:26 localhost nova_compute[281854]: 2025-12-02 10:09:26.007 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:26 localhost nova_compute[281854]: 2025-12-02 10:09:26.068 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:26 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:26.131 2 INFO neutron.agent.securitygroups_rpc [None req-28c3366c-1c91-49f5-b694-3c934cb049e5 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:26 localhost dnsmasq[322247]: exiting on receipt of SIGTERM Dec 2 05:09:26 localhost podman[322419]: 2025-12-02 10:09:26.147967098 +0000 UTC m=+0.068603700 container kill a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:09:26 localhost systemd[1]: libpod-a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4.scope: Deactivated successfully. 
Dec 2 05:09:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:26 localhost podman[322433]: 2025-12-02 10:09:26.216426872 +0000 UTC m=+0.056498667 container died a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:09:26 localhost podman[322433]: 2025-12-02 10:09:26.251871427 +0000 UTC m=+0.091943192 container cleanup a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:09:26 localhost systemd[1]: libpod-conmon-a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4.scope: Deactivated successfully. 
Dec 2 05:09:26 localhost podman[322435]: 2025-12-02 10:09:26.314000812 +0000 UTC m=+0.144593864 container remove a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:26 localhost ovn_controller[154505]: 2025-12-02T10:09:26Z|00375|binding|INFO|Releasing lport 317acccc-f5e4-452d-a107-edeeec553e45 from this chassis (sb_readonly=0) Dec 2 05:09:26 localhost nova_compute[281854]: 2025-12-02 10:09:26.326 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:26 localhost ovn_controller[154505]: 2025-12-02T10:09:26Z|00376|binding|INFO|Setting lport 317acccc-f5e4-452d-a107-edeeec553e45 down in Southbound Dec 2 05:09:26 localhost kernel: device tap317acccc-f5 left promiscuous mode Dec 2 05:09:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:26.335 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2d:2b7a/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=317acccc-f5e4-452d-a107-edeeec553e45) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:26.338 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 317acccc-f5e4-452d-a107-edeeec553e45 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:09:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:26.340 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:26.341 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[23217173-90ea-4a58-b254-e08eeec010f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:26 localhost nova_compute[281854]: 2025-12-02 10:09:26.356 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:26 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:26.379 2 INFO neutron.agent.securitygroups_rpc [None req-ec4a6adc-d68b-4418-9c41-c326e9a3fc34 
49e91c7702d54b1ab47e5f6dec5e0208 204a1137a20e40c995bb9cd512e75a5c - - default default] Security group member updated ['53fe5435-6101-4ff1-81ad-b53da833172b']#033[00m Dec 2 05:09:26 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:26.574 263406 INFO neutron.agent.dhcp.agent [None req-e6581cb2-aacd-4cba-bbb4-23fbe9cc03c5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:26 localhost systemd[1]: var-lib-containers-storage-overlay-e0a3660f35d8be91ceba0dc28d4805873619e4df442f680ea55f00f5be81d6df-merged.mount: Deactivated successfully. Dec 2 05:09:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a352e5b2dc38eefd424d28c0f2c8258cdb09415dc850c6ca1c5fd5d018e425e4-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:26 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:09:27 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:27.368 2 INFO neutron.agent.securitygroups_rpc [None req-a7f54859-efcd-4ecf-b40a-33f0bd3f4545 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:27 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:09:27 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:27.612 2 INFO neutron.agent.securitygroups_rpc [None req-d5002157-4534-49a0-a135-4c64a8485ed7 8b49e5c866794aad866d55bb5f154d67 7dffef2e74844a7ebb6ee68826fb7e57 - - default default] Security group member updated ['32471057-4d02-424a-9e3e-19629ab1677d']#033[00m Dec 2 05:09:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:27.930 263406 INFO neutron.agent.linux.ip_lib [None req-72c8040f-566b-4a27-b66c-9270e4271be9 - - - - - -] Device tap0f726015-c9 cannot be used as it has no MAC address#033[00m Dec 2 05:09:27 localhost dnsmasq[321744]: read 
/var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/addn_hosts - 0 addresses Dec 2 05:09:27 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/host Dec 2 05:09:27 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/e13ed0b0-82be-499b-b8af-a15d85a02df9/opts Dec 2 05:09:27 localhost podman[322484]: 2025-12-02 10:09:27.936469689 +0000 UTC m=+0.059896967 container kill c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:09:27 localhost nova_compute[281854]: 2025-12-02 10:09:27.961 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:27 localhost kernel: device tap0f726015-c9 entered promiscuous mode Dec 2 05:09:27 localhost NetworkManager[5965]: [1764670167.9710] manager: (tap0f726015-c9): new Generic device (/org/freedesktop/NetworkManager/Devices/61) Dec 2 05:09:27 localhost ovn_controller[154505]: 2025-12-02T10:09:27Z|00377|binding|INFO|Claiming lport 0f726015-c9a1-4dea-9271-1b6ceac095a9 for this chassis. Dec 2 05:09:27 localhost ovn_controller[154505]: 2025-12-02T10:09:27Z|00378|binding|INFO|0f726015-c9a1-4dea-9271-1b6ceac095a9: Claiming unknown Dec 2 05:09:27 localhost systemd-udevd[322503]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:09:27 localhost nova_compute[281854]: 2025-12-02 10:09:27.979 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:27 localhost ovn_controller[154505]: 2025-12-02T10:09:27Z|00379|binding|INFO|Setting lport 0f726015-c9a1-4dea-9271-1b6ceac095a9 up in Southbound Dec 2 05:09:27 localhost ovn_controller[154505]: 2025-12-02T10:09:27Z|00380|binding|INFO|Setting lport 0f726015-c9a1-4dea-9271-1b6ceac095a9 ovn-installed in OVS Dec 2 05:09:27 localhost nova_compute[281854]: 2025-12-02 10:09:27.985 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:27 localhost nova_compute[281854]: 2025-12-02 10:09:27.987 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:27.985 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 
'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0f726015-c9a1-4dea-9271-1b6ceac095a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:27.988 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0f726015-c9a1-4dea-9271-1b6ceac095a9 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:27.990 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port e0ba1cdf-a582-40ed-9ec2-5ecac58c1001 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:27.991 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:27 localhost nova_compute[281854]: 2025-12-02 10:09:27.992 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:27.992 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[47c41184-ee3c-4f1d-9e4f-9725a8a21018]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:28 localhost nova_compute[281854]: 2025-12-02 10:09:28.027 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:28 localhost nova_compute[281854]: 2025-12-02 10:09:28.080 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:28 localhost nova_compute[281854]: 2025-12-02 10:09:28.109 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:28 localhost ovn_controller[154505]: 2025-12-02T10:09:28Z|00381|binding|INFO|Releasing lport cbda893e-6a95-4f21-b53a-4734c24663e0 from this chassis (sb_readonly=0) Dec 2 05:09:28 localhost ovn_controller[154505]: 2025-12-02T10:09:28Z|00382|binding|INFO|Setting lport cbda893e-6a95-4f21-b53a-4734c24663e0 down in Southbound Dec 2 05:09:28 localhost nova_compute[281854]: 2025-12-02 10:09:28.160 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:28 localhost kernel: device tapcbda893e-6a left promiscuous mode Dec 2 05:09:28 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:28.172 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-e13ed0b0-82be-499b-b8af-a15d85a02df9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e13ed0b0-82be-499b-b8af-a15d85a02df9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 
'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e53e7092-6e4c-49fc-9858-fc71f27a93fb, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=cbda893e-6a95-4f21-b53a-4734c24663e0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:28 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:28.174 160221 INFO neutron.agent.ovn.metadata.agent [-] Port cbda893e-6a95-4f21-b53a-4734c24663e0 in datapath e13ed0b0-82be-499b-b8af-a15d85a02df9 unbound from our chassis#033[00m Dec 2 05:09:28 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:28.176 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e13ed0b0-82be-499b-b8af-a15d85a02df9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:28 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:28.177 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a6859549-6394-4a64-9441-4331f5fcec1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:28 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:28.195 2 INFO neutron.agent.securitygroups_rpc [None req-31d4af97-7fb6-4706-a5b2-299b30ee98fa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:28 localhost nova_compute[281854]: 2025-12-02 10:09:28.202 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
05:09:28 localhost podman[322565]: Dec 2 05:09:28 localhost podman[322565]: 2025-12-02 10:09:28.935813731 +0000 UTC m=+0.080313251 container create f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:09:28 localhost systemd[1]: Started libpod-conmon-f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1.scope. Dec 2 05:09:28 localhost systemd[1]: Started libcrun container. Dec 2 05:09:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0082650ca85757f463f17513ece17c6de93625b11703d74e781dbc5f1a1c4255/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:28 localhost podman[322565]: 2025-12-02 10:09:28.998948604 +0000 UTC m=+0.143448104 container init f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:09:28 localhost podman[322565]: 2025-12-02 10:09:28.899865663 +0000 UTC m=+0.044365183 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:29 localhost 
podman[322565]: 2025-12-02 10:09:29.011669453 +0000 UTC m=+0.156168953 container start f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:29 localhost dnsmasq[322584]: started, version 2.85 cachesize 150 Dec 2 05:09:29 localhost dnsmasq[322584]: DNS service limited to local subnets Dec 2 05:09:29 localhost dnsmasq[322584]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:29 localhost dnsmasq[322584]: warning: no upstream servers configured Dec 2 05:09:29 localhost dnsmasq-dhcp[322584]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:09:29 localhost dnsmasq[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:29 localhost dnsmasq-dhcp[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:29 localhost dnsmasq-dhcp[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:29 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:29.077 263406 INFO neutron.agent.dhcp.agent [None req-72c8040f-566b-4a27-b66c-9270e4271be9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:27Z, description=, device_id=, device_owner=, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6bc3a2cd-c417-4b5d-9595-682f73217846, ip_allocation=immediate, mac_address=fa:16:3e:3e:7c:03, name=tempest-NetworksTestDHCPv6-1742388920, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=46, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['3b8e86d5-811c-4d15-ac08-ca299a1dcf8d'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:26Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2218, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:27Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:09:29 localhost dnsmasq[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 1 addresses Dec 2 05:09:29 localhost dnsmasq-dhcp[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:29 localhost podman[322602]: 2025-12-02 10:09:29.277814055 +0000 UTC m=+0.054619546 container kill f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:29 localhost dnsmasq-dhcp[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:29 localhost systemd[1]: tmp-crun.wQmwH8.mount: Deactivated successfully. Dec 2 05:09:30 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:30.224 263406 INFO neutron.agent.dhcp.agent [None req-a2769c21-c1c1-49b9-ac11-cb0ef3ad6959 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:09:30 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:30.417 263406 INFO neutron.agent.dhcp.agent [None req-6740c1e5-40a3-467e-8ae4-367d22a6f53f - - - - - -] DHCP configuration for ports {'6bc3a2cd-c417-4b5d-9595-682f73217846'} is completed#033[00m Dec 2 05:09:30 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:30.731 2 INFO neutron.agent.securitygroups_rpc [None req-b3ef0962-bb50-4849-b9de-83492a397177 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:31 localhost nova_compute[281854]: 2025-12-02 10:09:31.009 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:31 localhost nova_compute[281854]: 2025-12-02 10:09:31.069 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 
Dec 2 05:09:31 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:31.262 2 INFO neutron.agent.securitygroups_rpc [None req-b62fea3d-778e-4171-9633-628f1b789028 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:31 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:31 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:31 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:31 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:09:31 localhost dnsmasq[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:31 localhost 
dnsmasq-dhcp[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:31 localhost podman[322638]: 2025-12-02 10:09:31.514801139 +0000 UTC m=+0.064611283 container kill f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:09:31 localhost dnsmasq-dhcp[322584]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:09:31 localhost podman[322653]: 2025-12-02 10:09:31.646880808 +0000 UTC m=+0.103891459 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:31 localhost podman[322653]: 2025-12-02 10:09:31.662763691 +0000 UTC m=+0.119774322 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 05:09:31 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:09:31 localhost dnsmasq[321744]: exiting on receipt of SIGTERM Dec 2 05:09:31 localhost podman[322697]: 2025-12-02 10:09:31.944715285 +0000 UTC m=+0.062160127 container kill c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:09:31 localhost systemd[1]: libpod-c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2.scope: Deactivated successfully. Dec 2 05:09:32 localhost podman[322711]: 2025-12-02 10:09:32.026935276 +0000 UTC m=+0.059409264 container died c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:32 localhost podman[322711]: 2025-12-02 10:09:32.082901677 +0000 UTC m=+0.115375585 container remove c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e13ed0b0-82be-499b-b8af-a15d85a02df9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:09:32 localhost systemd[1]: libpod-conmon-c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2.scope: Deactivated successfully. Dec 2 05:09:32 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:32.122 263406 INFO neutron.agent.dhcp.agent [None req-5f17fb96-f330-4019-aa5e-95d7d9fba19e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:32 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:32.404 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:32 localhost dnsmasq[322584]: exiting on receipt of SIGTERM Dec 2 05:09:32 localhost podman[322752]: 2025-12-02 10:09:32.414175466 +0000 UTC m=+0.056154017 container kill f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:32 localhost systemd[1]: libpod-f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1.scope: Deactivated successfully. 
Dec 2 05:09:32 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:32.470 2 INFO neutron.agent.securitygroups_rpc [None req-1d81950f-2cd1-4171-b1b7-8ccf81612998 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:32 localhost podman[322765]: 2025-12-02 10:09:32.490408427 +0000 UTC m=+0.058672214 container died f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:32 localhost systemd[1]: tmp-crun.IH6egM.mount: Deactivated successfully. Dec 2 05:09:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:32 localhost systemd[1]: var-lib-containers-storage-overlay-061e35e8e2271744f22f5b8b10d46d8b4670aa5cc18d5b13debaf7e3b4be708c-merged.mount: Deactivated successfully. Dec 2 05:09:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c91cb890c763f2b817a9d03e47be27b8f7450bbd14e8105681e8ee0af033f1b2-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:32 localhost systemd[1]: run-netns-qdhcp\x2de13ed0b0\x2d82be\x2d499b\x2db8af\x2da15d85a02df9.mount: Deactivated successfully. Dec 2 05:09:32 localhost systemd[1]: var-lib-containers-storage-overlay-0082650ca85757f463f17513ece17c6de93625b11703d74e781dbc5f1a1c4255-merged.mount: Deactivated successfully. 
Dec 2 05:09:32 localhost podman[322765]: 2025-12-02 10:09:32.527513676 +0000 UTC m=+0.095777393 container cleanup f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:09:32 localhost systemd[1]: libpod-conmon-f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1.scope: Deactivated successfully. Dec 2 05:09:32 localhost podman[322766]: 2025-12-02 10:09:32.558510122 +0000 UTC m=+0.123345338 container remove f05328c70b6a8371db284d537df6a69435e16c191b198679fd9cab936d4007b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:09:32 localhost ovn_controller[154505]: 2025-12-02T10:09:32Z|00383|binding|INFO|Releasing lport 0f726015-c9a1-4dea-9271-1b6ceac095a9 from this chassis (sb_readonly=0) Dec 2 05:09:32 localhost ovn_controller[154505]: 2025-12-02T10:09:32Z|00384|binding|INFO|Setting lport 0f726015-c9a1-4dea-9271-1b6ceac095a9 down in Southbound Dec 2 05:09:32 localhost kernel: device tap0f726015-c9 left promiscuous mode Dec 2 05:09:32 localhost nova_compute[281854]: 2025-12-02 10:09:32.609 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:32.617 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0f726015-c9a1-4dea-9271-1b6ceac095a9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:32.620 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0f726015-c9a1-4dea-9271-1b6ceac095a9 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:09:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:32.623 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, 
tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:32.624 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[71d74e38-4c16-4f9b-9973-ed8640a1ace5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:32 localhost nova_compute[281854]: 2025-12-02 10:09:32.630 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:33 localhost ovn_controller[154505]: 2025-12-02T10:09:33Z|00385|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:09:33 localhost nova_compute[281854]: 2025-12-02 10:09:33.093 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:33 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:33.223 263406 INFO neutron.agent.dhcp.agent [None req-faa8b226-e2ab-4254-84c3-644b5b41fc72 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:33 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. 
Dec 2 05:09:33 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:33.507 2 INFO neutron.agent.securitygroups_rpc [None req-2ab86868-457f-4852-a90e-5fcf962a86b2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:33 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:33.705 2 INFO neutron.agent.securitygroups_rpc [None req-4574e29d-6803-42ec-b043-afe3e9e41c81 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:34 localhost openstack_network_exporter[242845]: ERROR 10:09:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:09:34 localhost openstack_network_exporter[242845]: ERROR 10:09:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:09:34 localhost openstack_network_exporter[242845]: ERROR 10:09:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:09:34 localhost openstack_network_exporter[242845]: ERROR 10:09:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:09:34 localhost openstack_network_exporter[242845]: Dec 2 05:09:34 localhost openstack_network_exporter[242845]: ERROR 10:09:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:09:34 localhost openstack_network_exporter[242845]: Dec 2 05:09:34 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:34.167 2 INFO neutron.agent.securitygroups_rpc [None req-808059d3-8bd0-4321-909f-628d45d51793 49e91c7702d54b1ab47e5f6dec5e0208 204a1137a20e40c995bb9cd512e75a5c - - default default] Security group member updated ['53fe5435-6101-4ff1-81ad-b53da833172b']#033[00m Dec 2 05:09:34 localhost 
ovn_metadata_agent[160216]: 2025-12-02 10:09:34.221 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:34 localhost nova_compute[281854]: 2025-12-02 10:09:34.223 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:34.225 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:09:34 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:34 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:34 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:34 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:09:35 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:35.546 2 INFO neutron.agent.securitygroups_rpc [None 
req-3d05d8f5-1d82-449d-b4e5-f5f672622e53 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:35.670 263406 INFO neutron.agent.linux.ip_lib [None req-a7211d26-a01f-4c7f-9f57-27cc0c04f11f - - - - - -] Device tap0845b737-de cannot be used as it has no MAC address#033[00m Dec 2 05:09:35 localhost nova_compute[281854]: 2025-12-02 10:09:35.697 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:35 localhost kernel: device tap0845b737-de entered promiscuous mode Dec 2 05:09:35 localhost ovn_controller[154505]: 2025-12-02T10:09:35Z|00386|binding|INFO|Claiming lport 0845b737-de43-4aef-bed0-c9dd0310ccc7 for this chassis. Dec 2 05:09:35 localhost NetworkManager[5965]: [1764670175.7036] manager: (tap0845b737-de): new Generic device (/org/freedesktop/NetworkManager/Devices/62) Dec 2 05:09:35 localhost nova_compute[281854]: 2025-12-02 10:09:35.704 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:35 localhost ovn_controller[154505]: 2025-12-02T10:09:35Z|00387|binding|INFO|0845b737-de43-4aef-bed0-c9dd0310ccc7: Claiming unknown Dec 2 05:09:35 localhost systemd-udevd[322803]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:09:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:35.718 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef2:2261/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0845b737-de43-4aef-bed0-c9dd0310ccc7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:35.721 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0845b737-de43-4aef-bed0-c9dd0310ccc7 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:35.723 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 956d2c69-d1c9-44fb-9d8b-fcedfd67220a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:35.723 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:35.724 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[530815cf-9616-49ad-be0d-5b29488422bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:35 localhost journal[230136]: ethtool ioctl error on tap0845b737-de: No such device Dec 2 05:09:35 localhost nova_compute[281854]: 2025-12-02 10:09:35.732 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:35 localhost journal[230136]: ethtool ioctl error on tap0845b737-de: No such device Dec 2 05:09:35 localhost ovn_controller[154505]: 2025-12-02T10:09:35Z|00388|binding|INFO|Setting lport 0845b737-de43-4aef-bed0-c9dd0310ccc7 ovn-installed in OVS Dec 2 05:09:35 localhost ovn_controller[154505]: 2025-12-02T10:09:35Z|00389|binding|INFO|Setting lport 0845b737-de43-4aef-bed0-c9dd0310ccc7 up in Southbound Dec 2 05:09:35 localhost nova_compute[281854]: 2025-12-02 10:09:35.733 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:35 localhost journal[230136]: ethtool ioctl error on tap0845b737-de: No such device Dec 2 05:09:35 localhost journal[230136]: ethtool ioctl error on tap0845b737-de: No such device Dec 2 05:09:35 localhost journal[230136]: ethtool ioctl error on tap0845b737-de: No such device Dec 2 05:09:35 localhost journal[230136]: ethtool ioctl error on tap0845b737-de: No such device 
Dec 2 05:09:35 localhost journal[230136]: ethtool ioctl error on tap0845b737-de: No such device Dec 2 05:09:35 localhost journal[230136]: ethtool ioctl error on tap0845b737-de: No such device Dec 2 05:09:35 localhost nova_compute[281854]: 2025-12-02 10:09:35.769 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:35 localhost nova_compute[281854]: 2025-12-02 10:09:35.791 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 localhost nova_compute[281854]: 2025-12-02 10:09:36.011 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 localhost podman[240799]: time="2025-12-02T10:09:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:09:36 localhost podman[240799]: @ - - [02/Dec/2025:10:09:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:09:36 localhost nova_compute[281854]: 2025-12-02 10:09:36.070 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 localhost podman[240799]: @ - - [02/Dec/2025:10:09:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18779 "" "Go-http-client/1.1" Dec 2 05:09:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:36 localhost podman[322874]: Dec 2 05:09:36 localhost podman[322874]: 2025-12-02 10:09:36.603658171 +0000 UTC m=+0.089939398 container create 
62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 05:09:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:36.621 2 INFO neutron.agent.securitygroups_rpc [None req-11576e49-abbc-421e-9ae1-ea6ee8281fd6 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:36 localhost systemd[1]: Started libpod-conmon-62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be.scope. Dec 2 05:09:36 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:36 localhost podman[322874]: 2025-12-02 10:09:36.565681469 +0000 UTC m=+0.051962736 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e045d2b0104541aa45b9552e205f851555854865b20457010235722c9179ccb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:36 localhost podman[322874]: 2025-12-02 10:09:36.678493015 +0000 UTC m=+0.164774252 container init 62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:09:36 localhost podman[322874]: 2025-12-02 10:09:36.688688058 +0000 UTC m=+0.174969295 container start 62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:36 localhost dnsmasq[322893]: started, version 2.85 cachesize 150 Dec 2 05:09:36 localhost dnsmasq[322893]: DNS service limited to local subnets Dec 2 05:09:36 localhost dnsmasq[322893]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:36 localhost dnsmasq[322893]: warning: no upstream servers configured Dec 2 05:09:36 localhost dnsmasq[322893]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:36 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:36.784 263406 INFO neutron.agent.linux.ip_lib [None req-ecfa2bea-906c-4dc7-b115-80f4dd5bed17 - - - - - -] Device tapcbaad62d-24 cannot be used as it has no MAC address#033[00m Dec 2 05:09:36 localhost nova_compute[281854]: 2025-12-02 10:09:36.809 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 localhost kernel: device tapcbaad62d-24 entered promiscuous mode Dec 2 05:09:36 localhost NetworkManager[5965]: [1764670176.8155] manager: (tapcbaad62d-24): new Generic device (/org/freedesktop/NetworkManager/Devices/63) Dec 2 05:09:36 localhost systemd-udevd[322805]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:09:36 localhost ovn_controller[154505]: 2025-12-02T10:09:36Z|00390|binding|INFO|Claiming lport cbaad62d-24be-4326-997a-688e88770b3c for this chassis. 
Dec 2 05:09:36 localhost nova_compute[281854]: 2025-12-02 10:09:36.815 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 localhost ovn_controller[154505]: 2025-12-02T10:09:36Z|00391|binding|INFO|cbaad62d-24be-4326-997a-688e88770b3c: Claiming unknown Dec 2 05:09:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:36.829 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-05109c7b-d482-4449-af19-f4a4bb49c893', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05109c7b-d482-4449-af19-f4a4bb49c893', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=610be6b7-10e7-4876-ab18-6d4030872d9d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=cbaad62d-24be-4326-997a-688e88770b3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:36.832 160221 INFO neutron.agent.ovn.metadata.agent [-] Port cbaad62d-24be-4326-997a-688e88770b3c in datapath 
05109c7b-d482-4449-af19-f4a4bb49c893 bound to our chassis#033[00m Dec 2 05:09:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:36.836 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port dbd18595-d1e5-46d1-a5da-d0034be89313 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:36.836 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05109c7b-d482-4449-af19-f4a4bb49c893, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:36.837 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[57659bd6-01a8-4489-b2fd-a466450f9425]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. 
Dec 2 05:09:36 localhost journal[230136]: ethtool ioctl error on tapcbaad62d-24: No such device Dec 2 05:09:36 localhost ovn_controller[154505]: 2025-12-02T10:09:36Z|00392|binding|INFO|Setting lport cbaad62d-24be-4326-997a-688e88770b3c ovn-installed in OVS Dec 2 05:09:36 localhost ovn_controller[154505]: 2025-12-02T10:09:36Z|00393|binding|INFO|Setting lport cbaad62d-24be-4326-997a-688e88770b3c up in Southbound Dec 2 05:09:36 localhost journal[230136]: ethtool ioctl error on tapcbaad62d-24: No such device Dec 2 05:09:36 localhost nova_compute[281854]: 2025-12-02 10:09:36.852 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:36.853 263406 INFO neutron.agent.dhcp.agent [None req-541c081a-bfd8-411c-90b8-cdd29ccd627f - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:09:36 localhost journal[230136]: ethtool ioctl error on tapcbaad62d-24: No such device Dec 2 05:09:36 localhost journal[230136]: ethtool ioctl error on tapcbaad62d-24: No such device Dec 2 05:09:36 localhost journal[230136]: ethtool ioctl error on tapcbaad62d-24: No such device Dec 2 05:09:36 localhost journal[230136]: ethtool ioctl error on tapcbaad62d-24: No such device Dec 2 05:09:36 localhost journal[230136]: ethtool ioctl error on tapcbaad62d-24: No such device Dec 2 05:09:36 localhost journal[230136]: ethtool ioctl error on tapcbaad62d-24: No such device Dec 2 05:09:36 localhost nova_compute[281854]: 2025-12-02 10:09:36.893 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 localhost nova_compute[281854]: 2025-12-02 10:09:36.916 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:36 
localhost podman[322906]: 2025-12-02 10:09:36.948710986 +0000 UTC m=+0.091508759 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:09:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:09:36 localhost podman[322906]: 2025-12-02 10:09:36.984009497 +0000 UTC m=+0.126807210 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:09:36 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:09:37 localhost podman[322963]: 2025-12-02 10:09:37.06292134 +0000 UTC m=+0.076240763 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:37 localhost dnsmasq[322893]: exiting on receipt of SIGTERM Dec 2 05:09:37 localhost podman[322982]: 2025-12-02 10:09:37.082409889 +0000 UTC m=+0.049898961 container kill 62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, 
tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:09:37 localhost systemd[1]: libpod-62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be.scope: Deactivated successfully. Dec 2 05:09:37 localhost podman[322963]: 2025-12-02 10:09:37.098972101 +0000 UTC m=+0.112291534 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller) Dec 2 05:09:37 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:09:37 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:37.116 2 INFO neutron.agent.securitygroups_rpc [None req-841b4da2-cab1-42f7-ac13-ca29294f546a 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:37 localhost podman[323006]: 2025-12-02 10:09:37.154390068 +0000 UTC m=+0.060444132 container died 62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:09:37 localhost podman[323006]: 2025-12-02 10:09:37.182920058 +0000 UTC m=+0.088974042 container cleanup 62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:09:37 localhost systemd[1]: libpod-conmon-62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be.scope: Deactivated successfully. 
Dec 2 05:09:37 localhost podman[323011]: 2025-12-02 10:09:37.231880362 +0000 UTC m=+0.126007039 container remove 62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:09:37 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:37.478 2 INFO neutron.agent.securitygroups_rpc [None req-841b4da2-cab1-42f7-ac13-ca29294f546a 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:37 localhost systemd[1]: var-lib-containers-storage-overlay-5e045d2b0104541aa45b9552e205f851555854865b20457010235722c9179ccb-merged.mount: Deactivated successfully. Dec 2 05:09:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62afbb3cf5a784195d384305e65cc4cbdf5cb827266ffaa429b3ac48efb8d8be-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:09:37 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:37.649 2 INFO neutron.agent.securitygroups_rpc [None req-fb787287-e6b7-452a-9552-33fb0c49fb57 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']#033[00m Dec 2 05:09:37 localhost podman[323078]: Dec 2 05:09:37 localhost podman[323078]: 2025-12-02 10:09:37.818502846 +0000 UTC m=+0.085451299 container create 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:37 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:37.828 2 INFO neutron.agent.securitygroups_rpc [None req-f57a8374-1238-48d5-81d1-d11d5ba885ce 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:37 localhost systemd[1]: Started libpod-conmon-7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a.scope. Dec 2 05:09:37 localhost systemd[1]: tmp-crun.U5rcLP.mount: Deactivated successfully. Dec 2 05:09:37 localhost podman[323078]: 2025-12-02 10:09:37.779883096 +0000 UTC m=+0.046831619 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:37 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87ff39b2ecb04dea17c5dd28c2d134e115adfe624f9984f492cbd7de6d99878/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:37 localhost podman[323078]: 2025-12-02 10:09:37.912063309 +0000 UTC m=+0.179011762 container init 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:37 localhost podman[323078]: 2025-12-02 10:09:37.918710586 +0000 UTC m=+0.185659039 container start 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:09:37 localhost dnsmasq[323096]: started, version 2.85 cachesize 150 Dec 2 05:09:37 localhost dnsmasq[323096]: DNS service limited to local subnets Dec 2 05:09:37 localhost dnsmasq[323096]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:37 localhost dnsmasq[323096]: warning: no upstream servers configured Dec 
2 05:09:37 localhost dnsmasq-dhcp[323096]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:09:37 localhost dnsmasq[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/addn_hosts - 0 addresses Dec 2 05:09:37 localhost dnsmasq-dhcp[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/host Dec 2 05:09:37 localhost dnsmasq-dhcp[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/opts Dec 2 05:09:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:38.101 263406 INFO neutron.agent.dhcp.agent [None req-b675ed77-47ef-446a-ada0-2884f7eec885 - - - - - -] DHCP configuration for ports {'7c84f980-101d-4cd5-af99-219bdb6dca01'} is completed#033[00m Dec 2 05:09:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:38.152 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:37Z, description=, device_id=71e9a386-3dc5-4933-920f-317312e12047, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0803d891-92bd-4433-91d3-38fa14b1f114, ip_allocation=immediate, mac_address=fa:16:3e:b4:9d:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:33Z, description=, dns_domain=, id=05109c7b-d482-4449-af19-f4a4bb49c893, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1849952638, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26733, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2238, status=ACTIVE, subnets=['6a712875-7f47-461d-b090-4a856246df1e'], tags=[], 
tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:34Z, vlan_transparent=None, network_id=05109c7b-d482-4449-af19-f4a4bb49c893, port_security_enabled=False, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2254, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:38Z on network 05109c7b-d482-4449-af19-f4a4bb49c893#033[00m Dec 2 05:09:38 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:38 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:38 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:38 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': 
finished Dec 2 05:09:38 localhost dnsmasq[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/addn_hosts - 1 addresses Dec 2 05:09:38 localhost dnsmasq-dhcp[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/host Dec 2 05:09:38 localhost podman[323133]: 2025-12-02 10:09:38.348854999 +0000 UTC m=+0.051175955 container kill 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:38 localhost dnsmasq-dhcp[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/opts Dec 2 05:09:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:38.765 263406 INFO neutron.agent.dhcp.agent [None req-b80f4a0b-081e-44ed-8d38-5d7b00ae9377 - - - - - -] DHCP configuration for ports {'0803d891-92bd-4433-91d3-38fa14b1f114'} is completed#033[00m Dec 2 05:09:38 localhost podman[323181]: Dec 2 05:09:38 localhost podman[323181]: 2025-12-02 10:09:38.816104041 +0000 UTC m=+0.083362743 container create 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) 
Dec 2 05:09:38 localhost systemd[1]: Started libpod-conmon-26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0.scope. Dec 2 05:09:38 localhost podman[323181]: 2025-12-02 10:09:38.777571784 +0000 UTC m=+0.044830516 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:38 localhost systemd[1]: Started libcrun container. Dec 2 05:09:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8a360b6b870487f4f87e73a8aa1f4f9953a1bffbc3a531d45090d4e01f5f96a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:38 localhost podman[323181]: 2025-12-02 10:09:38.89900512 +0000 UTC m=+0.166263832 container init 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:09:38 localhost podman[323181]: 2025-12-02 10:09:38.909642863 +0000 UTC m=+0.176901565 container start 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:09:38 localhost dnsmasq[323200]: started, version 
2.85 cachesize 150 Dec 2 05:09:38 localhost dnsmasq[323200]: DNS service limited to local subnets Dec 2 05:09:38 localhost dnsmasq[323200]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:38 localhost dnsmasq[323200]: warning: no upstream servers configured Dec 2 05:09:38 localhost dnsmasq-dhcp[323200]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 2 05:09:38 localhost dnsmasq[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:38 localhost dnsmasq-dhcp[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:38 localhost dnsmasq-dhcp[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:38.972 263406 INFO neutron.agent.dhcp.agent [None req-00734e8a-b518-42d7-adc6-79dc4b989d6a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:36Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=20c5b76e-0c80-44cf-aacd-3f2f3640633d, ip_allocation=immediate, mac_address=fa:16:3e:28:87:15, name=tempest-NetworksTestDHCPv6-176597900, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, 
qos_policy_id=None, revision_number=49, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['be3eb0ce-daea-40cb-853a-f3837f9a82f6', 'e0b78287-f484-4227-959d-ba30d71df44a'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:35Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2251, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:37Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:09:39 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:39.002 2 INFO neutron.agent.securitygroups_rpc [None req-1ad7ca5c-e344-40e1-8595-888c801ea96b 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:39 localhost dnsmasq[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:09:39 localhost dnsmasq-dhcp[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:39 localhost dnsmasq-dhcp[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:39 localhost podman[323219]: 2025-12-02 10:09:39.16354924 +0000 UTC m=+0.055230893 container kill 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:09:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:39.194 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:37Z, description=, device_id=71e9a386-3dc5-4933-920f-317312e12047, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0803d891-92bd-4433-91d3-38fa14b1f114, ip_allocation=immediate, mac_address=fa:16:3e:b4:9d:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:33Z, description=, dns_domain=, id=05109c7b-d482-4449-af19-f4a4bb49c893, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1849952638, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26733, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2238, status=ACTIVE, subnets=['6a712875-7f47-461d-b090-4a856246df1e'], tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:34Z, vlan_transparent=None, network_id=05109c7b-d482-4449-af19-f4a4bb49c893, port_security_enabled=False, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2254, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:38Z on network 05109c7b-d482-4449-af19-f4a4bb49c893#033[00m Dec 2 05:09:39 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:39.226 160221 
DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:09:39 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:39.262 2 INFO neutron.agent.securitygroups_rpc [None req-cc3286c0-8479-41a4-833f-f53341ebdf18 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:39.278 263406 INFO neutron.agent.dhcp.agent [None req-7a0f97ce-92b6-48f6-aac2-ab14b5b42e71 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '0845b737-de43-4aef-bed0-c9dd0310ccc7'} is completed#033[00m Dec 2 05:09:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:39.403 263406 INFO neutron.agent.dhcp.agent [None req-0ef5e002-d21d-41ee-9265-970c9927a5b5 - - - - - -] DHCP configuration for ports {'20c5b76e-0c80-44cf-aacd-3f2f3640633d'} is completed#033[00m Dec 2 05:09:39 localhost dnsmasq[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/addn_hosts - 1 addresses Dec 2 05:09:39 localhost dnsmasq-dhcp[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/host Dec 2 05:09:39 localhost podman[323272]: 2025-12-02 10:09:39.453462695 +0000 UTC m=+0.051703678 container kill 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:09:39 localhost dnsmasq-dhcp[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/opts Dec 2 05:09:39 localhost dnsmasq[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:39 localhost dnsmasq-dhcp[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:39 localhost dnsmasq-dhcp[323200]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:39 localhost podman[323285]: 2025-12-02 10:09:39.507216548 +0000 UTC m=+0.067488449 container kill 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:09:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:39.741 263406 INFO neutron.agent.dhcp.agent [None req-098607c9-980d-4bbb-9369-6adb34c05e75 - - - - - -] DHCP configuration for ports {'0803d891-92bd-4433-91d3-38fa14b1f114'} is completed#033[00m Dec 2 05:09:40 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:40.226 2 INFO neutron.agent.securitygroups_rpc [None req-d726f52f-c5d0-4b2e-935e-07d00a13737f 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:40 localhost systemd[1]: tmp-crun.Lxqmdo.mount: Deactivated successfully. 
Dec 2 05:09:40 localhost dnsmasq[323200]: exiting on receipt of SIGTERM Dec 2 05:09:40 localhost systemd[1]: libpod-26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0.scope: Deactivated successfully. Dec 2 05:09:40 localhost podman[323333]: 2025-12-02 10:09:40.655653213 +0000 UTC m=+0.069139124 container kill 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:09:40 localhost podman[323345]: 2025-12-02 10:09:40.711054789 +0000 UTC m=+0.045680378 container died 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:09:40 localhost podman[323345]: 2025-12-02 10:09:40.758513933 +0000 UTC m=+0.093139452 container cleanup 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:09:40 localhost systemd[1]: libpod-conmon-26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0.scope: Deactivated successfully. Dec 2 05:09:40 localhost podman[323354]: 2025-12-02 10:09:40.841548417 +0000 UTC m=+0.155288479 container remove 26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:41 localhost nova_compute[281854]: 2025-12-02 10:09:41.014 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:41 localhost nova_compute[281854]: 2025-12-02 10:09:41.072 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:41 localhost dnsmasq[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/addn_hosts - 0 addresses Dec 2 05:09:41 localhost dnsmasq-dhcp[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/host Dec 2 05:09:41 localhost podman[323406]: 2025-12-02 10:09:41.155982756 +0000 UTC m=+0.069584706 container kill 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:09:41 localhost dnsmasq-dhcp[323096]: read /var/lib/neutron/dhcp/05109c7b-d482-4449-af19-f4a4bb49c893/opts Dec 2 05:09:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:41 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:09:41 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:41 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:09:41 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:09:41 localhost nova_compute[281854]: 2025-12-02 10:09:41.446 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:41 localhost ovn_controller[154505]: 2025-12-02T10:09:41Z|00394|binding|INFO|Releasing lport cbaad62d-24be-4326-997a-688e88770b3c from this chassis (sb_readonly=0) Dec 2 05:09:41 localhost kernel: device tapcbaad62d-24 left promiscuous mode Dec 2 05:09:41 localhost ovn_controller[154505]: 2025-12-02T10:09:41Z|00395|binding|INFO|Setting lport cbaad62d-24be-4326-997a-688e88770b3c 
down in Southbound Dec 2 05:09:41 localhost nova_compute[281854]: 2025-12-02 10:09:41.472 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:41.625 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-05109c7b-d482-4449-af19-f4a4bb49c893', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05109c7b-d482-4449-af19-f4a4bb49c893', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=610be6b7-10e7-4876-ab18-6d4030872d9d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=cbaad62d-24be-4326-997a-688e88770b3c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:41.628 160221 INFO neutron.agent.ovn.metadata.agent [-] Port cbaad62d-24be-4326-997a-688e88770b3c in datapath 05109c7b-d482-4449-af19-f4a4bb49c893 unbound from our chassis#033[00m Dec 2 05:09:41 localhost 
ovn_metadata_agent[160216]: 2025-12-02 10:09:41.630 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05109c7b-d482-4449-af19-f4a4bb49c893, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:41.631 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a6931e-45da-42a0-a51c-79c9b74eeaad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:41 localhost systemd[1]: var-lib-containers-storage-overlay-f8a360b6b870487f4f87e73a8aa1f4f9953a1bffbc3a531d45090d4e01f5f96a-merged.mount: Deactivated successfully. Dec 2 05:09:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26cbe6a9c45b23ed1f48daaadc1b5d59d7c5d5159e0dbc7b04bb373d602e27a0-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:41 localhost podman[323466]: Dec 2 05:09:41 localhost podman[323466]: 2025-12-02 10:09:41.840692773 +0000 UTC m=+0.093257127 container create f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:09:41 localhost systemd[1]: Started libpod-conmon-f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720.scope. 
Dec 2 05:09:41 localhost podman[323466]: 2025-12-02 10:09:41.794868781 +0000 UTC m=+0.047433175 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:41 localhost systemd[1]: Started libcrun container. Dec 2 05:09:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06ba67381f0b9b7877d863f3ab5fbc0fd8196e0bb57eb66befc8af43a84552e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:41 localhost podman[323466]: 2025-12-02 10:09:41.909937908 +0000 UTC m=+0.162502262 container init f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:09:41 localhost podman[323466]: 2025-12-02 10:09:41.920396567 +0000 UTC m=+0.172960931 container start f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:09:41 localhost dnsmasq[323485]: started, version 2.85 cachesize 150 Dec 2 05:09:41 localhost dnsmasq[323485]: DNS service limited to local subnets Dec 2 05:09:41 localhost 
dnsmasq[323485]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:41 localhost dnsmasq[323485]: warning: no upstream servers configured Dec 2 05:09:41 localhost dnsmasq[323485]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:42.204 263406 INFO neutron.agent.dhcp.agent [None req-50462b5c-5931-4a56-ba65-733f2870f6f1 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '0845b737-de43-4aef-bed0-c9dd0310ccc7'} is completed#033[00m Dec 2 05:09:42 localhost dnsmasq[323485]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:42 localhost podman[323503]: 2025-12-02 10:09:42.302974993 +0000 UTC m=+0.048371481 container kill f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:42.708 263406 INFO neutron.agent.dhcp.agent [None req-905300cb-4c0c-448f-89c3-f221575adcef - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '0845b737-de43-4aef-bed0-c9dd0310ccc7'} is completed#033[00m Dec 2 05:09:42 localhost podman[323541]: 2025-12-02 10:09:42.841295928 +0000 UTC m=+0.069574995 container kill f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:09:42 localhost dnsmasq[323485]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:43 localhost dnsmasq[323096]: exiting on receipt of SIGTERM Dec 2 05:09:43 localhost systemd[1]: tmp-crun.0fXi5Z.mount: Deactivated successfully. Dec 2 05:09:43 localhost podman[323577]: 2025-12-02 10:09:43.073546018 +0000 UTC m=+0.051811833 container kill 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:43 localhost systemd[1]: libpod-7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a.scope: Deactivated successfully. 
Dec 2 05:09:43 localhost podman[323591]: 2025-12-02 10:09:43.122321437 +0000 UTC m=+0.038323943 container died 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:09:43 localhost podman[323591]: 2025-12-02 10:09:43.151553707 +0000 UTC m=+0.067556213 container cleanup 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:09:43 localhost systemd[1]: libpod-conmon-7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a.scope: Deactivated successfully. 
Dec 2 05:09:43 localhost podman[323593]: 2025-12-02 10:09:43.209177632 +0000 UTC m=+0.118122809 container remove 7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-05109c7b-d482-4449-af19-f4a4bb49c893, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:09:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:43.292 263406 INFO neutron.agent.dhcp.agent [None req-ef93004c-3c39-406f-bf32-a724b707d8de - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '0845b737-de43-4aef-bed0-c9dd0310ccc7'} is completed#033[00m Dec 2 05:09:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:43.299 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:43 localhost systemd[1]: var-lib-containers-storage-overlay-a87ff39b2ecb04dea17c5dd28c2d134e115adfe624f9984f492cbd7de6d99878-merged.mount: Deactivated successfully. Dec 2 05:09:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c35db7fe2764775d51d30f075b938e893c95ba1b691a66fc72c0d39e338b55a-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:43 localhost systemd[1]: run-netns-qdhcp\x2d05109c7b\x2dd482\x2d4449\x2daf19\x2df4a4bb49c893.mount: Deactivated successfully. 
Dec 2 05:09:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:43.679 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:43 localhost ovn_controller[154505]: 2025-12-02T10:09:43Z|00396|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:09:43 localhost dnsmasq[323485]: exiting on receipt of SIGTERM Dec 2 05:09:43 localhost systemd[1]: libpod-f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720.scope: Deactivated successfully. Dec 2 05:09:43 localhost podman[323638]: 2025-12-02 10:09:43.988231563 +0000 UTC m=+0.066330168 container kill f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:09:44 localhost nova_compute[281854]: 2025-12-02 10:09:44.029 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:44 localhost podman[323653]: 2025-12-02 10:09:44.109865315 +0000 UTC m=+0.061022838 container died f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:09:44 localhost systemd[1]: tmp-crun.RJnPdc.mount: Deactivated successfully. Dec 2 05:09:44 localhost podman[323653]: 2025-12-02 10:09:44.160518224 +0000 UTC m=+0.111675727 container remove f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:44 localhost systemd[1]: libpod-conmon-f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720.scope: Deactivated successfully. 
Dec 2 05:09:44 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:44 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:44 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:44 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:09:44 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:44.616 2 INFO neutron.agent.securitygroups_rpc [None req-fbbd4af2-250f-4ff2-b3ab-e75b109a47fa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:44 localhost systemd[1]: var-lib-containers-storage-overlay-06ba67381f0b9b7877d863f3ab5fbc0fd8196e0bb57eb66befc8af43a84552e6-merged.mount: 
Deactivated successfully. Dec 2 05:09:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7d73300a9323e51821c20b4cb037d2740df83ae267c170b5d076fff552cc720-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:44 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:44.680 2 INFO neutron.agent.securitygroups_rpc [None req-26916261-820c-405a-8570-4b6047e10a3c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:45.721 263406 INFO neutron.agent.linux.ip_lib [None req-770421f6-3f2e-4c35-98e2-e6c251798a4e - - - - - -] Device tap71626d7c-b9 cannot be used as it has no MAC address#033[00m Dec 2 05:09:45 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:45.751 2 INFO neutron.agent.securitygroups_rpc [None req-5632dc43-e5b5-45de-a516-10b988e48fe8 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:45 localhost nova_compute[281854]: 2025-12-02 10:09:45.750 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:45 localhost kernel: device tap71626d7c-b9 entered promiscuous mode Dec 2 05:09:45 localhost nova_compute[281854]: 2025-12-02 10:09:45.756 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:45 localhost NetworkManager[5965]: [1764670185.7579] manager: (tap71626d7c-b9): new Generic device (/org/freedesktop/NetworkManager/Devices/64) Dec 2 05:09:45 localhost ovn_controller[154505]: 2025-12-02T10:09:45Z|00397|binding|INFO|Claiming lport 71626d7c-b9fe-49b7-966b-658b52a6b534 for this chassis. 
Dec 2 05:09:45 localhost ovn_controller[154505]: 2025-12-02T10:09:45Z|00398|binding|INFO|71626d7c-b9fe-49b7-966b-658b52a6b534: Claiming unknown Dec 2 05:09:45 localhost systemd-udevd[323716]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:09:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:45.774 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-1ff380aa-d975-48ea-aada-9148640d9136', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ff380aa-d975-48ea-aada-9148640d9136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aae5e2dae10d49c38d5d63835c7677e3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f283dbb1-0190-4442-8c26-ed3f8c0bba35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=71626d7c-b9fe-49b7-966b-658b52a6b534) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:45.776 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 71626d7c-b9fe-49b7-966b-658b52a6b534 in datapath 1ff380aa-d975-48ea-aada-9148640d9136 bound to our chassis#033[00m Dec 2 05:09:45 localhost ovn_metadata_agent[160216]: 2025-12-02 
10:09:45.778 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1ff380aa-d975-48ea-aada-9148640d9136 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:09:45 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:45.778 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[22e7a3d1-2a7f-4b26-b843-853a8a6a21af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:45 localhost journal[230136]: ethtool ioctl error on tap71626d7c-b9: No such device Dec 2 05:09:45 localhost nova_compute[281854]: 2025-12-02 10:09:45.794 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:45 localhost journal[230136]: ethtool ioctl error on tap71626d7c-b9: No such device Dec 2 05:09:45 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:45.798 2 INFO neutron.agent.securitygroups_rpc [None req-50568852-e227-40d9-a94b-d9d972f0134a 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:45 localhost journal[230136]: ethtool ioctl error on tap71626d7c-b9: No such device Dec 2 05:09:45 localhost journal[230136]: ethtool ioctl error on tap71626d7c-b9: No such device Dec 2 05:09:45 localhost nova_compute[281854]: 2025-12-02 10:09:45.807 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:45 localhost ovn_controller[154505]: 2025-12-02T10:09:45Z|00399|binding|INFO|Setting lport 71626d7c-b9fe-49b7-966b-658b52a6b534 ovn-installed in OVS Dec 2 05:09:45 localhost ovn_controller[154505]: 2025-12-02T10:09:45Z|00400|binding|INFO|Setting lport 
71626d7c-b9fe-49b7-966b-658b52a6b534 up in Southbound Dec 2 05:09:45 localhost nova_compute[281854]: 2025-12-02 10:09:45.808 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:45 localhost journal[230136]: ethtool ioctl error on tap71626d7c-b9: No such device Dec 2 05:09:45 localhost journal[230136]: ethtool ioctl error on tap71626d7c-b9: No such device Dec 2 05:09:45 localhost journal[230136]: ethtool ioctl error on tap71626d7c-b9: No such device Dec 2 05:09:45 localhost journal[230136]: ethtool ioctl error on tap71626d7c-b9: No such device Dec 2 05:09:45 localhost nova_compute[281854]: 2025-12-02 10:09:45.845 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:45 localhost nova_compute[281854]: 2025-12-02 10:09:45.876 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:46 localhost nova_compute[281854]: 2025-12-02 10:09:46.020 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:46 localhost podman[323766]: Dec 2 05:09:46 localhost podman[323766]: 2025-12-02 10:09:46.049208985 +0000 UTC m=+0.104036433 container create a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:46 
localhost nova_compute[281854]: 2025-12-02 10:09:46.074 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:46 localhost podman[323766]: 2025-12-02 10:09:45.994419095 +0000 UTC m=+0.049246583 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:46 localhost systemd[1]: Started libpod-conmon-a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515.scope. Dec 2 05:09:46 localhost systemd[1]: tmp-crun.Rnv80Q.mount: Deactivated successfully. Dec 2 05:09:46 localhost systemd[1]: Started libcrun container. Dec 2 05:09:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71f2964b61897d8f2914a8e03c9b6a8f005672548d86055f884c131b4bf1b6c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:46 localhost podman[323766]: 2025-12-02 10:09:46.164997081 +0000 UTC m=+0.219824519 container init a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:09:46 localhost podman[323766]: 2025-12-02 10:09:46.175909022 +0000 UTC m=+0.230736470 container start a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:09:46 localhost dnsmasq[323791]: started, version 2.85 cachesize 150 Dec 2 05:09:46 localhost dnsmasq[323791]: DNS service limited to local subnets Dec 2 05:09:46 localhost dnsmasq[323791]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:46 localhost dnsmasq[323791]: warning: no upstream servers configured Dec 2 05:09:46 localhost dnsmasq-dhcp[323791]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:09:46 localhost dnsmasq-dhcp[323791]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 2 05:09:46 localhost dnsmasq[323791]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:46 localhost dnsmasq-dhcp[323791]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:46 localhost dnsmasq-dhcp[323791]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:46.250 263406 INFO neutron.agent.dhcp.agent [None req-061472a1-7ad7-445e-a4cc-30ebd0a6ca91 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:43Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=01a2d6dd-8ce8-48bd-93c0-ba2e2c26cdf6, 
ip_allocation=immediate, mac_address=fa:16:3e:dc:c3:b7, name=tempest-NetworksTestDHCPv6-1006860435, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=53, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['1234e32b-b572-4333-ae69-9076e6f1997e', '1fb1b810-2d2d-4940-8cad-41ff695eec18'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:43Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2274, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:43Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:09:46 localhost dnsmasq[323791]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:09:46 localhost dnsmasq-dhcp[323791]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:46 localhost dnsmasq-dhcp[323791]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:46 localhost podman[323816]: 2025-12-02 10:09:46.441329995 +0000 UTC m=+0.055295695 container kill a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:09:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:46.455 263406 INFO neutron.agent.dhcp.agent [None req-f015b6be-42f5-411a-b629-df358d78ea55 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '0845b737-de43-4aef-bed0-c9dd0310ccc7'} is completed#033[00m Dec 2 05:09:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:46.679 263406 INFO neutron.agent.dhcp.agent [None req-4e612800-7418-428f-a75f-5888b299f387 - - - - - -] DHCP configuration for ports {'01a2d6dd-8ce8-48bd-93c0-ba2e2c26cdf6'} is completed#033[00m Dec 2 05:09:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:46.703 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 35e0944d-0cc9-46d5-b463-8e827905e9f6 with type ""#033[00m Dec 2 05:09:46 localhost ovn_controller[154505]: 2025-12-02T10:09:46Z|00401|binding|INFO|Removing iface tap71626d7c-b9 ovn-installed in OVS Dec 2 05:09:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:46.705 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-1ff380aa-d975-48ea-aada-9148640d9136', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-1ff380aa-d975-48ea-aada-9148640d9136', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aae5e2dae10d49c38d5d63835c7677e3', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f283dbb1-0190-4442-8c26-ed3f8c0bba35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=71626d7c-b9fe-49b7-966b-658b52a6b534) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:46 localhost ovn_controller[154505]: 2025-12-02T10:09:46Z|00402|binding|INFO|Removing lport 71626d7c-b9fe-49b7-966b-658b52a6b534 ovn-installed in OVS Dec 2 05:09:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:46.707 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 71626d7c-b9fe-49b7-966b-658b52a6b534 in datapath 1ff380aa-d975-48ea-aada-9148640d9136 unbound from our chassis#033[00m Dec 2 05:09:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:46.710 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1ff380aa-d975-48ea-aada-9148640d9136 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:09:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:46.711 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c6194c-8fd9-44ce-ad9a-07d39f439a33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:46 localhost nova_compute[281854]: 2025-12-02 10:09:46.758 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
05:09:46 localhost dnsmasq[323791]: exiting on receipt of SIGTERM Dec 2 05:09:46 localhost podman[323887]: 2025-12-02 10:09:46.916410865 +0000 UTC m=+0.064164280 container kill a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:09:46 localhost systemd[1]: libpod-a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515.scope: Deactivated successfully. Dec 2 05:09:46 localhost podman[323877]: Dec 2 05:09:46 localhost podman[323877]: 2025-12-02 10:09:46.928930399 +0000 UTC m=+0.105824931 container create 22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ff380aa-d975-48ea-aada-9148640d9136, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:09:46 localhost systemd[1]: Started libpod-conmon-22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044.scope. Dec 2 05:09:46 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:46 localhost podman[323877]: 2025-12-02 10:09:46.878727611 +0000 UTC m=+0.055622193 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91a0a58d34bd39dcc08a9cbec3014076ed043bc85e9c4b743578420c1d1c1a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:46 localhost podman[323877]: 2025-12-02 10:09:46.998678088 +0000 UTC m=+0.175572630 container init 22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ff380aa-d975-48ea-aada-9148640d9136, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 05:09:47 localhost podman[323877]: 2025-12-02 10:09:47.00815195 +0000 UTC m=+0.185046462 container start 22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ff380aa-d975-48ea-aada-9148640d9136, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:47 localhost dnsmasq[323935]: started, version 2.85 cachesize 150 Dec 2 05:09:47 localhost dnsmasq[323935]: DNS service limited to local subnets Dec 2 05:09:47 localhost dnsmasq[323935]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n 
IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:47 localhost dnsmasq[323935]: warning: no upstream servers configured Dec 2 05:09:47 localhost dnsmasq-dhcp[323935]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:09:47 localhost dnsmasq[323935]: read /var/lib/neutron/dhcp/1ff380aa-d975-48ea-aada-9148640d9136/addn_hosts - 0 addresses Dec 2 05:09:47 localhost dnsmasq-dhcp[323935]: read /var/lib/neutron/dhcp/1ff380aa-d975-48ea-aada-9148640d9136/host Dec 2 05:09:47 localhost dnsmasq-dhcp[323935]: read /var/lib/neutron/dhcp/1ff380aa-d975-48ea-aada-9148640d9136/opts Dec 2 05:09:47 localhost ovn_controller[154505]: 2025-12-02T10:09:47Z|00403|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:09:47 localhost podman[323904]: 2025-12-02 10:09:47.0531721 +0000 UTC m=+0.117305748 container died a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:09:47 localhost nova_compute[281854]: 2025-12-02 10:09:47.059 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:09:47 localhost podman[323904]: 2025-12-02 10:09:47.08505909 +0000 UTC m=+0.149192708 container cleanup a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:09:47 localhost systemd[1]: libpod-conmon-a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515.scope: Deactivated successfully. Dec 2 05:09:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:47.125 263406 INFO neutron.agent.dhcp.agent [None req-9b0f8d69-dce7-4607-abc9-0f8fd56d2265 - - - - - -] DHCP configuration for ports {'e213d2f6-a81e-4742-91b5-9e4adb2a81c7'} is completed#033[00m Dec 2 05:09:47 localhost podman[323906]: 2025-12-02 10:09:47.134536808 +0000 UTC m=+0.191556196 container remove a60738a1582b279a04499731949681e07ae5f4163407375a94b2bf434e0a3515 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:47 localhost dnsmasq[323935]: exiting on receipt of SIGTERM Dec 2 05:09:47 localhost podman[323958]: 2025-12-02 10:09:47.253152939 +0000 UTC m=+0.062144676 container kill 22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ff380aa-d975-48ea-aada-9148640d9136, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:47 localhost systemd[1]: libpod-22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044.scope: Deactivated successfully. Dec 2 05:09:47 localhost podman[323971]: 2025-12-02 10:09:47.315439119 +0000 UTC m=+0.048499403 container died 22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ff380aa-d975-48ea-aada-9148640d9136, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:47 localhost podman[323971]: 2025-12-02 10:09:47.395305327 +0000 UTC m=+0.128365561 container cleanup 22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ff380aa-d975-48ea-aada-9148640d9136, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:47 localhost systemd[1]: 
libpod-conmon-22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044.scope: Deactivated successfully. Dec 2 05:09:47 localhost podman[323973]: 2025-12-02 10:09:47.417371145 +0000 UTC m=+0.141521082 container remove 22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ff380aa-d975-48ea-aada-9148640d9136, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:47 localhost kernel: device tap71626d7c-b9 left promiscuous mode Dec 2 05:09:47 localhost nova_compute[281854]: 2025-12-02 10:09:47.428 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:47 localhost nova_compute[281854]: 2025-12-02 10:09:47.440 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:47.466 263406 INFO neutron.agent.dhcp.agent [None req-c3ff87fb-0d04-4261-bc95-972a4420b0ea - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:47.467 263406 INFO neutron.agent.dhcp.agent [None req-c3ff87fb-0d04-4261-bc95-972a4420b0ea - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:47.467 263406 INFO neutron.agent.dhcp.agent [None req-c3ff87fb-0d04-4261-bc95-972a4420b0ea - - - - - -] Network not present, action: clean_devices, action_kwargs: 
{}#033[00m Dec 2 05:09:47 localhost ovn_controller[154505]: 2025-12-02T10:09:47Z|00404|binding|INFO|Releasing lport 0845b737-de43-4aef-bed0-c9dd0310ccc7 from this chassis (sb_readonly=0) Dec 2 05:09:47 localhost kernel: device tap0845b737-de left promiscuous mode Dec 2 05:09:47 localhost ovn_controller[154505]: 2025-12-02T10:09:47Z|00405|binding|INFO|Setting lport 0845b737-de43-4aef-bed0-c9dd0310ccc7 down in Southbound Dec 2 05:09:47 localhost nova_compute[281854]: 2025-12-02 10:09:47.484 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:47 localhost nova_compute[281854]: 2025-12-02 10:09:47.500 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:47 localhost nova_compute[281854]: 2025-12-02 10:09:47.502 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:47.510 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8::f816:3eff:fef2:2261/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 
'neutron:revision_number': '8', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0845b737-de43-4aef-bed0-c9dd0310ccc7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:47.512 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0845b737-de43-4aef-bed0-c9dd0310ccc7 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:09:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:47.514 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:47.515 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[756864ba-a8f9-453b-a232-692e09fd918d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:47.553 263406 INFO neutron.agent.dhcp.agent [None req-9a676760-0409-4231-9eff-6533d601fa1e - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '0845b737-de43-4aef-bed0-c9dd0310ccc7'} is completed#033[00m Dec 2 05:09:48 localhost systemd[1]: var-lib-containers-storage-overlay-d91a0a58d34bd39dcc08a9cbec3014076ed043bc85e9c4b743578420c1d1c1a5-merged.mount: Deactivated successfully. 
Dec 2 05:09:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22d4491abb2099d1a4e03c8c4b14c710b2ffdbfc86fefd09e19d61438cac3044-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:48 localhost systemd[1]: var-lib-containers-storage-overlay-a71f2964b61897d8f2914a8e03c9b6a8f005672548d86055f884c131b4bf1b6c-merged.mount: Deactivated successfully. Dec 2 05:09:48 localhost systemd[1]: run-netns-qdhcp\x2d1ff380aa\x2dd975\x2d48ea\x2daada\x2d9148640d9136.mount: Deactivated successfully. Dec 2 05:09:48 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:09:48 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:48 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:48 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:48 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:09:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:48.547 263406 INFO neutron.agent.linux.ip_lib [None req-16efee1f-1beb-4ba5-a1a9-346a37ad8668 - - - - - -] Device tap92c576f1-7f cannot be used as it has no MAC address#033[00m Dec 2 05:09:48 localhost nova_compute[281854]: 2025-12-02 10:09:48.575 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:48 localhost kernel: device tap92c576f1-7f entered promiscuous mode Dec 2 05:09:48 localhost NetworkManager[5965]: [1764670188.5831] manager: (tap92c576f1-7f): new 
Generic device (/org/freedesktop/NetworkManager/Devices/65) Dec 2 05:09:48 localhost systemd-udevd[323718]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:09:48 localhost nova_compute[281854]: 2025-12-02 10:09:48.584 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:48 localhost ovn_controller[154505]: 2025-12-02T10:09:48Z|00406|binding|INFO|Claiming lport 92c576f1-7fec-4bf8-bac6-a5142e33525f for this chassis. Dec 2 05:09:48 localhost ovn_controller[154505]: 2025-12-02T10:09:48Z|00407|binding|INFO|92c576f1-7fec-4bf8-bac6-a5142e33525f: Claiming unknown Dec 2 05:09:48 localhost ovn_controller[154505]: 2025-12-02T10:09:48Z|00408|binding|INFO|Setting lport 92c576f1-7fec-4bf8-bac6-a5142e33525f ovn-installed in OVS Dec 2 05:09:48 localhost ovn_controller[154505]: 2025-12-02T10:09:48Z|00409|binding|INFO|Setting lport 92c576f1-7fec-4bf8-bac6-a5142e33525f up in Southbound Dec 2 05:09:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:48.595 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': 
'', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92c576f1-7fec-4bf8-bac6-a5142e33525f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:48.601 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 92c576f1-7fec-4bf8-bac6-a5142e33525f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:48 localhost nova_compute[281854]: 2025-12-02 10:09:48.601 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:48.603 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 13eaf550-7fb7-4bf7-a22d-4be7f3776b15 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:48.604 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:48 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:48.605 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[38c01e5d-54c4-4100-96fd-1b16e194ff32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:09:48 localhost nova_compute[281854]: 2025-12-02 10:09:48.623 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:48 localhost nova_compute[281854]: 2025-12-02 10:09:48.661 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:48 localhost nova_compute[281854]: 2025-12-02 10:09:48.692 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:48 localhost podman[324014]: 2025-12-02 10:09:48.714719088 +0000 UTC m=+0.085444587 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm) Dec 2 05:09:48 localhost podman[324014]: 2025-12-02 10:09:48.721179881 +0000 UTC m=+0.091905340 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:09:48 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:09:49 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:49.245 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:09:49 localhost podman[324083]: Dec 2 05:09:49 localhost podman[324083]: 2025-12-02 10:09:49.525382472 +0000 UTC m=+0.096495693 container create 2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 2 05:09:49 localhost systemd[1]: Started libpod-conmon-2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339.scope. Dec 2 05:09:49 localhost podman[324083]: 2025-12-02 10:09:49.478945004 +0000 UTC m=+0.050058245 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:49 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd5e0da3a9274120093985a24a538856f9ad9edfb441f144c923f70d7f4a7a93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:49 localhost podman[324083]: 2025-12-02 10:09:49.60033512 +0000 UTC m=+0.171448311 container init 2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:09:49 localhost podman[324083]: 2025-12-02 10:09:49.606824772 +0000 UTC m=+0.177937973 container start 2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:09:49 localhost dnsmasq[324102]: started, version 2.85 cachesize 150 Dec 2 05:09:49 localhost dnsmasq[324102]: DNS service limited to local subnets Dec 2 05:09:49 localhost dnsmasq[324102]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:49 localhost dnsmasq[324102]: warning: no upstream servers configured Dec 
2 05:09:49 localhost dnsmasq-dhcp[324102]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:09:49 localhost dnsmasq[324102]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:49 localhost dnsmasq-dhcp[324102]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:49 localhost dnsmasq-dhcp[324102]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:49 localhost ovn_controller[154505]: 2025-12-02T10:09:49Z|00410|binding|INFO|Releasing lport 92c576f1-7fec-4bf8-bac6-a5142e33525f from this chassis (sb_readonly=0) Dec 2 05:09:49 localhost kernel: device tap92c576f1-7f left promiscuous mode Dec 2 05:09:49 localhost ovn_controller[154505]: 2025-12-02T10:09:49Z|00411|binding|INFO|Setting lport 92c576f1-7fec-4bf8-bac6-a5142e33525f down in Southbound Dec 2 05:09:49 localhost nova_compute[281854]: 2025-12-02 10:09:49.690 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:49.697 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 
'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92c576f1-7fec-4bf8-bac6-a5142e33525f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:49.698 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 92c576f1-7fec-4bf8-bac6-a5142e33525f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:09:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:49.699 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:49.700 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ef650d9a-d2d1-4507-be80-9513ea15722b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:49 localhost nova_compute[281854]: 2025-12-02 10:09:49.715 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:49 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:49.730 263406 INFO neutron.agent.dhcp.agent [None req-817c9eb6-03c8-4368-8c16-46eb6b30f438 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:09:51 localhost nova_compute[281854]: 2025-12-02 10:09:51.020 
281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:51 localhost nova_compute[281854]: 2025-12-02 10:09:51.075 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:51.477 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8:0:1:f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': 
'2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:51.479 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated#033[00m Dec 2 05:09:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:51.481 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:51.482 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9b1125-40d6-45e1-ad1e-31267c7212af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:51 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:09:51 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r 
pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:51 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:51 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:09:51 localhost nova_compute[281854]: 2025-12-02 10:09:51.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:09:51 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:51.998 263406 INFO neutron.agent.linux.ip_lib [None req-3703d572-c5ea-43bd-881a-ea84f051eddb - - - - - -] Device tapb35a7019-cd cannot be used as it has no MAC address#033[00m Dec 2 05:09:52 localhost dnsmasq[324102]: exiting on receipt of SIGTERM Dec 2 05:09:52 localhost systemd[1]: libpod-2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339.scope: Deactivated successfully. 
Dec 2 05:09:52 localhost podman[324130]: 2025-12-02 10:09:52.008894105 +0000 UTC m=+0.078139534 container kill 2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.023 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost kernel: device tapb35a7019-cd entered promiscuous mode Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.031 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost NetworkManager[5965]: [1764670192.0313] manager: (tapb35a7019-cd): new Generic device (/org/freedesktop/NetworkManager/Devices/66) Dec 2 05:09:52 localhost systemd-udevd[324170]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:09:52 localhost ovn_controller[154505]: 2025-12-02T10:09:52Z|00412|binding|INFO|Claiming lport b35a7019-cd13-49ba-ae0b-aa70d3ce3b27 for this chassis. 
Dec 2 05:09:52 localhost ovn_controller[154505]: 2025-12-02T10:09:52Z|00413|binding|INFO|b35a7019-cd13-49ba-ae0b-aa70d3ce3b27: Claiming unknown Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.045 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-5f48cce7-247c-4b5d-8287-ac14f7453254', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f48cce7-247c-4b5d-8287-ac14f7453254', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bad680c763640dba71a7865b355817c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=beadeea7-0616-4ea7-b4f9-7f4239a4c055, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b35a7019-cd13-49ba-ae0b-aa70d3ce3b27) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.047 160221 INFO neutron.agent.ovn.metadata.agent [-] Port b35a7019-cd13-49ba-ae0b-aa70d3ce3b27 in datapath 5f48cce7-247c-4b5d-8287-ac14f7453254 bound to our chassis#033[00m Dec 2 05:09:52 localhost podman[324122]: 2025-12-02 10:09:51.99707956 +0000 UTC m=+0.081518043 container health_status 
34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.052 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6d45c4c4-f9a4-4f81-8d66-a84620e4f8a7 IP addresses were not retrieved 
from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.052 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f48cce7-247c-4b5d-8287-ac14f7453254, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.053 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[54de9c86-fa9e-43fa-8071-1a8a3441794c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:52 localhost ovn_controller[154505]: 2025-12-02T10:09:52Z|00414|binding|INFO|Setting lport b35a7019-cd13-49ba-ae0b-aa70d3ce3b27 ovn-installed in OVS Dec 2 05:09:52 localhost ovn_controller[154505]: 2025-12-02T10:09:52Z|00415|binding|INFO|Setting lport b35a7019-cd13-49ba-ae0b-aa70d3ce3b27 up in Southbound Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.081 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost podman[324122]: 2025-12-02 10:09:52.083075992 +0000 UTC m=+0.167514455 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': 
{'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125) Dec 2 05:09:52 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:09:52 localhost podman[324169]: 2025-12-02 10:09:52.114761856 +0000 UTC m=+0.059103266 container died 2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.130 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.166 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost podman[324169]: 2025-12-02 10:09:52.205948377 +0000 UTC m=+0.150289787 container remove 2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:52 localhost systemd[1]: libpod-conmon-2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339.scope: Deactivated successfully. 
Dec 2 05:09:52 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:52.278 263406 INFO neutron.agent.linux.ip_lib [None req-92afd4b8-08b5-47b8-b188-6b3ae2f822da - - - - - -] Device tap92c576f1-7f cannot be used as it has no MAC address#033[00m Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.294 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost kernel: device tap92c576f1-7f entered promiscuous mode Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.298 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost NetworkManager[5965]: [1764670192.2984] manager: (tap92c576f1-7f): new Generic device (/org/freedesktop/NetworkManager/Devices/67) Dec 2 05:09:52 localhost ovn_controller[154505]: 2025-12-02T10:09:52Z|00416|binding|INFO|Claiming lport 92c576f1-7fec-4bf8-bac6-a5142e33525f for this chassis. 
Dec 2 05:09:52 localhost ovn_controller[154505]: 2025-12-02T10:09:52Z|00417|binding|INFO|92c576f1-7fec-4bf8-bac6-a5142e33525f: Claiming unknown Dec 2 05:09:52 localhost ovn_controller[154505]: 2025-12-02T10:09:52Z|00418|binding|INFO|Setting lport 92c576f1-7fec-4bf8-bac6-a5142e33525f ovn-installed in OVS Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.303 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.305 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.333 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.363 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost nova_compute[281854]: 2025-12-02 10:09:52.378 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.412 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fed5:4cac/64 2001:db8::2/64', 'neutron:device_id': 
'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92c576f1-7fec-4bf8-bac6-a5142e33525f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:52 localhost ovn_controller[154505]: 2025-12-02T10:09:52Z|00419|binding|INFO|Setting lport 92c576f1-7fec-4bf8-bac6-a5142e33525f up in Southbound Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.413 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 92c576f1-7fec-4bf8-bac6-a5142e33525f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.415 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 13eaf550-7fb7-4bf7-a22d-4be7f3776b15 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:52 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:52.416 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:52 localhost 
ovn_metadata_agent[160216]: 2025-12-02 10:09:52.416 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[d21532d6-968d-47bc-8c89-bc428e99735c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:52 localhost systemd[1]: var-lib-containers-storage-overlay-dd5e0da3a9274120093985a24a538856f9ad9edfb441f144c923f70d7f4a7a93-merged.mount: Deactivated successfully. Dec 2 05:09:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d078d0579b19647e101536e938f8941c1375245736bc9718b9a57e8363a4339-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:52 localhost podman[324280]: Dec 2 05:09:53 localhost podman[324280]: 2025-12-02 10:09:53.009508291 +0000 UTC m=+0.081193335 container create 9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:09:53 localhost podman[324280]: 2025-12-02 10:09:52.961519351 +0000 UTC m=+0.033204435 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:53 localhost systemd[1]: Started libpod-conmon-9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f.scope. Dec 2 05:09:53 localhost systemd[1]: Started libcrun container. 
Dec 2 05:09:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c861e4f12c023b4413be9bdb00f291501aad1d68a54fc61728f90b4aa4245df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:53 localhost podman[324280]: 2025-12-02 10:09:53.085070694 +0000 UTC m=+0.156755738 container init 9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:53 localhost podman[324280]: 2025-12-02 10:09:53.094104175 +0000 UTC m=+0.165789189 container start 9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 2 05:09:53 localhost dnsmasq[324316]: started, version 2.85 cachesize 150 Dec 2 05:09:53 localhost dnsmasq[324316]: DNS service limited to local subnets Dec 2 05:09:53 localhost dnsmasq[324316]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:53 localhost dnsmasq[324316]: warning: no upstream servers configured Dec 
2 05:09:53 localhost dnsmasq-dhcp[324316]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:09:53 localhost dnsmasq[324316]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/addn_hosts - 0 addresses Dec 2 05:09:53 localhost dnsmasq-dhcp[324316]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/host Dec 2 05:09:53 localhost dnsmasq-dhcp[324316]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/opts Dec 2 05:09:53 localhost dnsmasq[324316]: exiting on receipt of SIGTERM Dec 2 05:09:53 localhost systemd[1]: libpod-9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f.scope: Deactivated successfully. Dec 2 05:09:53 localhost podman[324323]: 2025-12-02 10:09:53.181796372 +0000 UTC m=+0.061856880 container died 9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:09:53 localhost podman[324323]: 2025-12-02 10:09:53.214800111 +0000 UTC m=+0.094860599 container cleanup 9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) 
Dec 2 05:09:53 localhost podman[324335]: 2025-12-02 10:09:53.240183708 +0000 UTC m=+0.059477036 container cleanup 9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:53 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:53.244 2 INFO neutron.agent.securitygroups_rpc [None req-2a02d4a7-eedb-47f7-975e-8a697d665d71 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']#033[00m Dec 2 05:09:53 localhost systemd[1]: libpod-conmon-9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f.scope: Deactivated successfully. 
Dec 2 05:09:53 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:53.304 2 INFO neutron.agent.securitygroups_rpc [None req-384a8cd4-c502-4296-9a0a-cda4da9440fe 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:53 localhost podman[324350]: 2025-12-02 10:09:53.311852437 +0000 UTC m=+0.075275606 container remove 9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:53.313 263406 INFO neutron.agent.dhcp.agent [None req-9d31655c-bfc4-45b2-b15e-b46b658fb8d4 - - - - - -] DHCP configuration for ports {'aa0c09f4-e09c-448e-8cfa-94b6199246a9'} is completed#033[00m Dec 2 05:09:53 localhost podman[324368]: Dec 2 05:09:53 localhost podman[324368]: 2025-12-02 10:09:53.402308218 +0000 UTC m=+0.076291664 container create 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:09:53 localhost systemd[1]: 
Started libpod-conmon-79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b.scope. Dec 2 05:09:53 localhost systemd[1]: Started libcrun container. Dec 2 05:09:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b078218d8ff167202cb16b55a3c321df829dc784041145cd6eb709858f6ad54d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:53 localhost podman[324368]: 2025-12-02 10:09:53.358323176 +0000 UTC m=+0.032306692 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:53 localhost podman[324368]: 2025-12-02 10:09:53.458437744 +0000 UTC m=+0.132421180 container init 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 2 05:09:53 localhost podman[324368]: 2025-12-02 10:09:53.468639945 +0000 UTC m=+0.142623381 container start 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:09:53 localhost dnsmasq[324387]: started, version 2.85 cachesize 150 Dec 2 05:09:53 
localhost dnsmasq[324387]: DNS service limited to local subnets Dec 2 05:09:53 localhost dnsmasq[324387]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:53 localhost dnsmasq[324387]: warning: no upstream servers configured Dec 2 05:09:53 localhost dnsmasq-dhcp[324387]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:09:53 localhost dnsmasq[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:53 localhost dnsmasq-dhcp[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:53 localhost dnsmasq-dhcp[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:53.513 263406 INFO neutron.agent.dhcp.agent [None req-92afd4b8-08b5-47b8-b188-6b3ae2f822da - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:52Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=8dce7b35-4d0d-431f-9d67-447f953f0572, ip_allocation=immediate, mac_address=fa:16:3e:53:62:1e, name=tempest-NetworksTestDHCPv6-568989, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=57, 
router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['daf22fec-c798-4dca-81bd-963fd98c882c', 'f425644c-747f-4698-a225-0d467296fbc7'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:49Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2344, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:52Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:09:53 localhost dnsmasq[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:09:53 localhost podman[324406]: 2025-12-02 10:09:53.680228285 +0000 UTC m=+0.060185496 container kill 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:09:53 localhost dnsmasq-dhcp[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:53 localhost dnsmasq-dhcp[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:53.814 263406 INFO neutron.agent.dhcp.agent [None req-4f9397d6-fafc-476f-99f1-4de003c9b3ac - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', 
'92c576f1-7fec-4bf8-bac6-a5142e33525f'} is completed#033[00m Dec 2 05:09:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:09:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:09:53 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:53.870 2 INFO neutron.agent.securitygroups_rpc [None req-552eb951-c19a-4f29-a133-451809159dee 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:53 localhost systemd[1]: var-lib-containers-storage-overlay-7c861e4f12c023b4413be9bdb00f291501aad1d68a54fc61728f90b4aa4245df-merged.mount: Deactivated successfully. Dec 2 05:09:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c73790160d60417a1f35139c8e30d45abddb173af456abed2dccc0b2c16973f-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:53 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:53.952 2 INFO neutron.agent.securitygroups_rpc [None req-793d7f6f-bcaa-4aba-a1ec-f239eb834fe6 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']#033[00m Dec 2 05:09:53 localhost systemd[1]: tmp-crun.26EMWx.mount: Deactivated successfully. 
Dec 2 05:09:53 localhost podman[324428]: 2025-12-02 10:09:53.966480063 +0000 UTC m=+0.102545353 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 05:09:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:53.975 263406 INFO neutron.agent.dhcp.agent [None req-bc22bd2e-6d7c-4680-8562-b571d99bfecc - - - - - -] DHCP configuration for ports {'8dce7b35-4d0d-431f-9d67-447f953f0572'} is completed#033[00m Dec 2 05:09:54 localhost podman[324429]: 2025-12-02 10:09:54.014068171 +0000 UTC m=+0.147637646 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:09:54 localhost podman[324429]: 2025-12-02 10:09:54.020825532 +0000 UTC m=+0.154395007 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', 
'--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:09:54 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:09:54 localhost podman[324428]: 2025-12-02 10:09:54.036970341 +0000 UTC m=+0.173035631 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:09:54 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:09:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:54.117 263406 INFO neutron.agent.linux.ip_lib [None req-70e9519e-ce9b-440a-b4cf-04deff565c9c - - - - - -] Device tapb822caec-0f cannot be used as it has no MAC address#033[00m Dec 2 05:09:54 localhost nova_compute[281854]: 2025-12-02 10:09:54.144 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:54 localhost kernel: device tapb822caec-0f entered promiscuous mode Dec 2 05:09:54 localhost NetworkManager[5965]: [1764670194.1496] manager: (tapb822caec-0f): new Generic device (/org/freedesktop/NetworkManager/Devices/68) Dec 2 05:09:54 localhost ovn_controller[154505]: 2025-12-02T10:09:54Z|00420|binding|INFO|Claiming lport b822caec-0ff2-45ee-8f19-f63f7afb253f for this chassis. Dec 2 05:09:54 localhost ovn_controller[154505]: 2025-12-02T10:09:54Z|00421|binding|INFO|b822caec-0ff2-45ee-8f19-f63f7afb253f: Claiming unknown Dec 2 05:09:54 localhost nova_compute[281854]: 2025-12-02 10:09:54.152 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:54.164 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-5198bb66-dd27-48f3-9334-ab53b7335bc8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5198bb66-dd27-48f3-9334-ab53b7335bc8', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e4dc3a6-b635-4672-b1fb-6fb25996b32e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b822caec-0ff2-45ee-8f19-f63f7afb253f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:54.167 160221 INFO neutron.agent.ovn.metadata.agent [-] Port b822caec-0ff2-45ee-8f19-f63f7afb253f in datapath 5198bb66-dd27-48f3-9334-ab53b7335bc8 bound to our chassis#033[00m Dec 2 05:09:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:54.168 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5198bb66-dd27-48f3-9334-ab53b7335bc8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:09:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:54.169 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[aeab8ae4-840f-48c0-8995-378a8668dc1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:54 localhost journal[230136]: ethtool ioctl error on tapb822caec-0f: No such device Dec 2 05:09:54 localhost ovn_controller[154505]: 2025-12-02T10:09:54Z|00422|binding|INFO|Setting lport b822caec-0ff2-45ee-8f19-f63f7afb253f ovn-installed in OVS Dec 2 05:09:54 localhost ovn_controller[154505]: 2025-12-02T10:09:54Z|00423|binding|INFO|Setting lport b822caec-0ff2-45ee-8f19-f63f7afb253f up in Southbound Dec 2 05:09:54 localhost journal[230136]: 
ethtool ioctl error on tapb822caec-0f: No such device Dec 2 05:09:54 localhost nova_compute[281854]: 2025-12-02 10:09:54.186 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:54 localhost journal[230136]: ethtool ioctl error on tapb822caec-0f: No such device Dec 2 05:09:54 localhost journal[230136]: ethtool ioctl error on tapb822caec-0f: No such device Dec 2 05:09:54 localhost journal[230136]: ethtool ioctl error on tapb822caec-0f: No such device Dec 2 05:09:54 localhost journal[230136]: ethtool ioctl error on tapb822caec-0f: No such device Dec 2 05:09:54 localhost journal[230136]: ethtool ioctl error on tapb822caec-0f: No such device Dec 2 05:09:54 localhost journal[230136]: ethtool ioctl error on tapb822caec-0f: No such device Dec 2 05:09:54 localhost nova_compute[281854]: 2025-12-02 10:09:54.228 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:54 localhost nova_compute[281854]: 2025-12-02 10:09:54.255 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:54 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:54.534 2 INFO neutron.agent.securitygroups_rpc [None req-ecc73d10-d9a3-477f-859a-88e3d0a4a336 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:09:54 localhost nova_compute[281854]: 2025-12-02 10:09:54.568 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:54 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : 
dispatch Dec 2 05:09:54 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:54 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:09:54 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:09:54 localhost dnsmasq[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:54 localhost dnsmasq-dhcp[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:09:54 localhost dnsmasq-dhcp[324387]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:09:54 localhost podman[324540]: 2025-12-02 10:09:54.758181261 +0000 UTC m=+0.059322402 container kill 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:09:55 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:55.009 2 INFO neutron.agent.securitygroups_rpc [None req-c4292fab-d4f2-45ec-8373-3372677610e3 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:55 localhost podman[324586]: Dec 2 05:09:55 localhost podman[324586]: 2025-12-02 10:09:55.158104558 +0000 UTC 
m=+0.092459375 container create c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:09:55 localhost systemd[1]: Started libpod-conmon-c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5.scope. Dec 2 05:09:55 localhost podman[324586]: 2025-12-02 10:09:55.111885577 +0000 UTC m=+0.046240464 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:55 localhost systemd[1]: Started libcrun container. Dec 2 05:09:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1c0a1d963187de3e3c282bd1aa2fa59ad7e13adf0d6aa09786d03fa9f921cd1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:55 localhost podman[324586]: 2025-12-02 10:09:55.23244382 +0000 UTC m=+0.166798637 container init c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:55 localhost podman[324586]: 2025-12-02 10:09:55.241650775 +0000 UTC m=+0.176005592 container start 
c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:55 localhost dnsmasq[324604]: started, version 2.85 cachesize 150 Dec 2 05:09:55 localhost dnsmasq[324604]: DNS service limited to local subnets Dec 2 05:09:55 localhost dnsmasq[324604]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:55 localhost dnsmasq[324604]: warning: no upstream servers configured Dec 2 05:09:55 localhost dnsmasq-dhcp[324604]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:09:55 localhost dnsmasq[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/addn_hosts - 0 addresses Dec 2 05:09:55 localhost dnsmasq-dhcp[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/host Dec 2 05:09:55 localhost dnsmasq-dhcp[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/opts Dec 2 05:09:55 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:55.252 2 INFO neutron.agent.securitygroups_rpc [None req-4940f51c-3349-4656-978b-9a0b4cd29cb9 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']#033[00m Dec 2 05:09:55 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:55.290 2 INFO neutron.agent.securitygroups_rpc [None req-d1fab671-1814-41db-9614-65c239fa9e70 6a4701e292e04a82a827d127f0ef5b65 
0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']#033[00m Dec 2 05:09:55 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:55.431 2 INFO neutron.agent.securitygroups_rpc [None req-4940f51c-3349-4656-978b-9a0b4cd29cb9 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']#033[00m Dec 2 05:09:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:55.432 263406 INFO neutron.agent.dhcp.agent [None req-f7d0b580-ff82-41b4-865d-1e55b256aa67 - - - - - -] DHCP configuration for ports {'e183ec07-efa1-48df-8f31-d919e8b5836e'} is completed#033[00m Dec 2 05:09:56 localhost nova_compute[281854]: 2025-12-02 10:09:56.024 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:56 localhost nova_compute[281854]: 2025-12-02 10:09:56.077 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:09:56 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:56.308 2 INFO neutron.agent.securitygroups_rpc [None req-2709b5dd-db11-4508-a989-29103dd3702e 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:56 localhost dnsmasq[324387]: exiting on receipt of SIGTERM Dec 2 05:09:56 localhost podman[324623]: 2025-12-02 10:09:56.431037951 +0000 UTC m=+0.061598672 container kill 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:09:56 localhost systemd[1]: libpod-79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b.scope: Deactivated successfully. Dec 2 05:09:56 localhost podman[324635]: 2025-12-02 10:09:56.501670363 +0000 UTC m=+0.056676601 container died 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:09:56 localhost podman[324635]: 2025-12-02 10:09:56.5398008 +0000 UTC m=+0.094806998 container cleanup 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:09:56 localhost systemd[1]: libpod-conmon-79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b.scope: Deactivated 
successfully. Dec 2 05:09:56 localhost podman[324642]: 2025-12-02 10:09:56.588068396 +0000 UTC m=+0.128653940 container remove 79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:09:56 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:56.640 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:55Z, description=, device_id=7059c3fd-a028-4cdb-9894-b6db3dc33369, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80164a20-3f5d-4eea-94e1-e26ceeea882c, ip_allocation=immediate, mac_address=fa:16:3e:04:14:44, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:47Z, description=, dns_domain=, id=5f48cce7-247c-4b5d-8287-ac14f7453254, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-897890507-network, port_security_enabled=True, project_id=5bad680c763640dba71a7865b355817c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30386, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2315, status=ACTIVE, subnets=['424417bc-1ee7-4d11-9ebf-680585d829a5'], tags=[], tenant_id=5bad680c763640dba71a7865b355817c, 
updated_at=2025-12-02T10:09:49Z, vlan_transparent=None, network_id=5f48cce7-247c-4b5d-8287-ac14f7453254, port_security_enabled=False, project_id=5bad680c763640dba71a7865b355817c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2357, status=DOWN, tags=[], tenant_id=5bad680c763640dba71a7865b355817c, updated_at=2025-12-02T10:09:56Z on network 5f48cce7-247c-4b5d-8287-ac14f7453254#033[00m Dec 2 05:09:56 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:56.648 2 INFO neutron.agent.securitygroups_rpc [None req-4752b673-ecc5-46d9-8169-f464ead4adc9 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']#033[00m Dec 2 05:09:56 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:56.666 2 INFO neutron.agent.securitygroups_rpc [None req-237e1b13-6077-411f-87b4-3c14ff8061ce 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']#033[00m Dec 2 05:09:56 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:56.770 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:56Z, description=, device_id=f6d749d1-1fc5-4651-88a9-dade37b0c49d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=49e6ac27-4822-4f2e-9d02-eb159ad3a2f0, ip_allocation=immediate, mac_address=fa:16:3e:b3:4f:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:50Z, description=, dns_domain=, id=5198bb66-dd27-48f3-9334-ab53b7335bc8, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1795840910, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64348, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2327, status=ACTIVE, subnets=['b62f2cc5-7e42-4784-b31d-7caa26c4d241'], tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:52Z, vlan_transparent=None, network_id=5198bb66-dd27-48f3-9334-ab53b7335bc8, port_security_enabled=False, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2358, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:56Z on network 5198bb66-dd27-48f3-9334-ab53b7335bc8#033[00m Dec 2 05:09:56 localhost nova_compute[281854]: 2025-12-02 10:09:56.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:56 localhost nova_compute[281854]: 2025-12-02 10:09:56.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:09:56 localhost systemd[1]: var-lib-containers-storage-overlay-b078218d8ff167202cb16b55a3c321df829dc784041145cd6eb709858f6ad54d-merged.mount: Deactivated successfully. Dec 2 05:09:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79150746a9216edc8205f684f273bce18a18959b91236e548172f664aa0c724b-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:09:56 localhost dnsmasq[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/addn_hosts - 1 addresses Dec 2 05:09:56 localhost dnsmasq-dhcp[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/host Dec 2 05:09:56 localhost dnsmasq-dhcp[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/opts Dec 2 05:09:56 localhost podman[324702]: 2025-12-02 10:09:56.996791908 +0000 UTC m=+0.066578866 container kill c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:09:57 localhost podman[324743]: Dec 2 05:09:57 localhost podman[324743]: 2025-12-02 10:09:57.155566829 +0000 UTC m=+0.105217375 container create ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:57 localhost systemd[1]: Started libpod-conmon-ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829.scope. 
Dec 2 05:09:57 localhost podman[324743]: 2025-12-02 10:09:57.113762935 +0000 UTC m=+0.063413461 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:57 localhost systemd[1]: Started libcrun container. Dec 2 05:09:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/615057096d3c152d4d1f671a5bd383cdd6ec5d4ac33268b4ace59e5fc7761d1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:57 localhost podman[324743]: 2025-12-02 10:09:57.233803063 +0000 UTC m=+0.183453589 container init ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:09:57 localhost podman[324743]: 2025-12-02 10:09:57.243482382 +0000 UTC m=+0.193132998 container start ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:57 localhost dnsmasq[324773]: started, version 2.85 cachesize 150 Dec 2 05:09:57 localhost dnsmasq[324773]: DNS service limited to local subnets Dec 2 05:09:57 localhost 
dnsmasq[324773]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:57 localhost dnsmasq[324773]: warning: no upstream servers configured Dec 2 05:09:57 localhost dnsmasq-dhcp[324773]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:09:57 localhost dnsmasq[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/addn_hosts - 1 addresses Dec 2 05:09:57 localhost dnsmasq-dhcp[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/host Dec 2 05:09:57 localhost dnsmasq-dhcp[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/opts Dec 2 05:09:57 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:57.291 263406 INFO neutron.agent.dhcp.agent [None req-2336ef4e-d482-43e2-8a28-f574fcaa88f4 - - - - - -] DHCP configuration for ports {'49e6ac27-4822-4f2e-9d02-eb159ad3a2f0'} is completed#033[00m Dec 2 05:09:57 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:57.442 2 INFO neutron.agent.securitygroups_rpc [None req-84ee1250-9ce8-4943-9e17-d2eb70522c28 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']#033[00m Dec 2 05:09:57 localhost podman[324804]: Dec 2 05:09:57 localhost podman[324804]: 2025-12-02 10:09:57.567005333 +0000 UTC m=+0.092235629 container create e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0) Dec 2 05:09:57 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:57.576 263406 INFO neutron.agent.dhcp.agent [None req-5bf11345-04cb-4b1b-b13f-a78f9fed4276 - - - - - -] DHCP configuration for ports {'80164a20-3f5d-4eea-94e1-e26ceeea882c'} is completed#033[00m Dec 2 05:09:57 localhost systemd[1]: Started libpod-conmon-e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978.scope. Dec 2 05:09:57 localhost podman[324804]: 2025-12-02 10:09:57.522001144 +0000 UTC m=+0.047231470 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:09:57 localhost systemd[1]: Started libcrun container. Dec 2 05:09:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c5b19e6915c8546eb393f0137d1cfffca610aac4aee4335fd5750356f1d42d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:09:57 localhost podman[324804]: 2025-12-02 10:09:57.637522873 +0000 UTC m=+0.162753169 container init e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:09:57 localhost podman[324804]: 2025-12-02 10:09:57.647603331 +0000 UTC m=+0.172833627 container start e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:09:57 localhost dnsmasq[324822]: started, version 2.85 cachesize 150 Dec 2 05:09:57 localhost dnsmasq[324822]: DNS service limited to local subnets Dec 2 05:09:57 localhost dnsmasq[324822]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:09:57 localhost dnsmasq[324822]: warning: no upstream servers configured Dec 2 05:09:57 localhost dnsmasq[324822]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:09:57 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:57.653 2 INFO neutron.agent.securitygroups_rpc [None req-f449fe39-4274-4abb-aff3-e3ba219c9fe2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:57 localhost nova_compute[281854]: 2025-12-02 10:09:57.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:57 localhost nova_compute[281854]: 2025-12-02 10:09:57.829 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:09:57 localhost nova_compute[281854]: 2025-12-02 10:09:57.829 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:09:57 localhost systemd[1]: tmp-crun.Ai3RmK.mount: Deactivated successfully. Dec 2 05:09:57 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:57.898 263406 INFO neutron.agent.dhcp.agent [None req-a19147eb-1323-426a-9512-386b3545cb86 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '92c576f1-7fec-4bf8-bac6-a5142e33525f'} is completed#033[00m Dec 2 05:09:57 localhost nova_compute[281854]: 2025-12-02 10:09:57.947 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:09:57 localhost nova_compute[281854]: 2025-12-02 10:09:57.948 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:09:57 localhost nova_compute[281854]: 2025-12-02 10:09:57.949 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:09:57 localhost nova_compute[281854]: 2025-12-02 10:09:57.949 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:09:57 localhost dnsmasq[324822]: exiting on receipt of SIGTERM Dec 2 05:09:57 localhost systemd[1]: tmp-crun.Xzsmy8.mount: Deactivated successfully. 
Dec 2 05:09:57 localhost systemd[1]: libpod-e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978.scope: Deactivated successfully. Dec 2 05:09:57 localhost podman[324840]: 2025-12-02 10:09:57.974491963 +0000 UTC m=+0.070321155 container kill e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:09:58 localhost podman[324852]: 2025-12-02 10:09:58.047119618 +0000 UTC m=+0.060854773 container died e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:09:58 localhost podman[324852]: 2025-12-02 10:09:58.110315312 +0000 UTC m=+0.124050407 container cleanup e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:09:58 localhost systemd[1]: libpod-conmon-e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978.scope: Deactivated successfully. Dec 2 05:09:58 localhost podman[324859]: 2025-12-02 10:09:58.174522953 +0000 UTC m=+0.175244780 container remove e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:09:58 localhost ovn_controller[154505]: 2025-12-02T10:09:58Z|00424|binding|INFO|Releasing lport 92c576f1-7fec-4bf8-bac6-a5142e33525f from this chassis (sb_readonly=0) Dec 2 05:09:58 localhost kernel: device tap92c576f1-7f left promiscuous mode Dec 2 05:09:58 localhost ovn_controller[154505]: 2025-12-02T10:09:58Z|00425|binding|INFO|Setting lport 92c576f1-7fec-4bf8-bac6-a5142e33525f down in Southbound Dec 2 05:09:58 localhost nova_compute[281854]: 2025-12-02 10:09:58.189 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:58.198 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, 
parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fed5:4cac/64 2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92c576f1-7fec-4bf8-bac6-a5142e33525f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:58.200 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 92c576f1-7fec-4bf8-bac6-a5142e33525f in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:09:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:58.203 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:58.204 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[94fec0b5-18a0-42f7-bfd6-78a187b4c10c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:58 localhost nova_compute[281854]: 
2025-12-02 10:09:58.217 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:58 localhost sshd[324883]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:09:58 localhost sshd[324885]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:09:58 localhost neutron_sriov_agent[256494]: 2025-12-02 10:09:58.481 2 INFO neutron.agent.securitygroups_rpc [None req-c9840334-22ed-4fcf-9fb8-d440584d45ac 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:09:58 localhost sshd[324887]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:09:58 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:09:58 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:58 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:09:58 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": 
["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:09:58 localhost systemd[1]: var-lib-containers-storage-overlay-e6c5b19e6915c8546eb393f0137d1cfffca610aac4aee4335fd5750356f1d42d-merged.mount: Deactivated successfully. Dec 2 05:09:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1bfd94cb54279b9c7451b5402b6c2019cb751bd2299790659c1e363c08b6978-userdata-shm.mount: Deactivated successfully. Dec 2 05:09:58 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. Dec 2 05:09:58 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:58.950 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:56Z, description=, device_id=f6d749d1-1fc5-4651-88a9-dade37b0c49d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=49e6ac27-4822-4f2e-9d02-eb159ad3a2f0, ip_allocation=immediate, mac_address=fa:16:3e:b3:4f:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:50Z, description=, dns_domain=, id=5198bb66-dd27-48f3-9334-ab53b7335bc8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1795840910, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64348, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2327, status=ACTIVE, 
subnets=['b62f2cc5-7e42-4784-b31d-7caa26c4d241'], tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:52Z, vlan_transparent=None, network_id=5198bb66-dd27-48f3-9334-ab53b7335bc8, port_security_enabled=False, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2358, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:09:56Z on network 5198bb66-dd27-48f3-9334-ab53b7335bc8#033[00m Dec 2 05:09:58 localhost nova_compute[281854]: 2025-12-02 10:09:58.969 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:09:58 localhost nova_compute[281854]: 2025-12-02 10:09:58.993 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:09:58 localhost nova_compute[281854]: 2025-12-02 10:09:58.993 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:09:58 localhost nova_compute[281854]: 2025-12-02 10:09:58.993 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.014 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.015 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.016 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.016 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.017 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:09:59 localhost dnsmasq[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/addn_hosts - 1 addresses Dec 2 05:09:59 localhost dnsmasq-dhcp[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/host Dec 2 05:09:59 localhost podman[324907]: 2025-12-02 10:09:59.178009936 +0000 UTC m=+0.063005440 container kill c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:09:59 localhost dnsmasq-dhcp[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/opts Dec 2 05:09:59 localhost systemd[1]: 
tmp-crun.fgOlCr.mount: Deactivated successfully. Dec 2 05:09:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:59.286 263406 INFO neutron.agent.linux.ip_lib [None req-546f31c4-5bf8-41b9-9961-fb9737511c5f - - - - - -] Device tap6cf9ab6e-02 cannot be used as it has no MAC address#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.367 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:59 localhost kernel: device tap6cf9ab6e-02 entered promiscuous mode Dec 2 05:09:59 localhost ovn_controller[154505]: 2025-12-02T10:09:59Z|00426|binding|INFO|Claiming lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b for this chassis. Dec 2 05:09:59 localhost ovn_controller[154505]: 2025-12-02T10:09:59Z|00427|binding|INFO|6cf9ab6e-0266-437d-b907-6e6749aa6e0b: Claiming unknown Dec 2 05:09:59 localhost NetworkManager[5965]: [1764670199.3749] manager: (tap6cf9ab6e-02): new Generic device (/org/freedesktop/NetworkManager/Devices/69) Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.377 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:59 localhost systemd-udevd[324956]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:09:59 localhost ovn_controller[154505]: 2025-12-02T10:09:59Z|00428|binding|INFO|Setting lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b ovn-installed in OVS Dec 2 05:09:59 localhost ovn_controller[154505]: 2025-12-02T10:09:59Z|00429|binding|INFO|Setting lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b up in Southbound Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.386 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:59 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:59.387 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6cf9ab6e-0266-437d-b907-6e6749aa6e0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:09:59 localhost 
nova_compute[281854]: 2025-12-02 10:09:59.389 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:59 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:59.391 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 6cf9ab6e-0266-437d-b907-6e6749aa6e0b in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:09:59 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:59.394 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 1ac3f767-f224-4e0f-9781-32978a5bc943 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:09:59 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:59.395 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:09:59 localhost ovn_metadata_agent[160216]: 2025-12-02 10:09:59.399 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc56732-e173-4a4a-8f62-c3e78a222e46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.421 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:59 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:09:59 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2398904436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.464 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.466 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:09:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:59.468 263406 INFO neutron.agent.dhcp.agent [None req-dc9d0775-f91f-4900-abb7-f0a752e25a49 - - - - - -] DHCP configuration for ports {'49e6ac27-4822-4f2e-9d02-eb159ad3a2f0'} is completed#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.503 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.539 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.539 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.702 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] 
This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.703 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11207MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.703 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.703 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:09:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:09:59.819 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:55Z, description=, device_id=7059c3fd-a028-4cdb-9894-b6db3dc33369, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=80164a20-3f5d-4eea-94e1-e26ceeea882c, ip_allocation=immediate, mac_address=fa:16:3e:04:14:44, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:47Z, description=, dns_domain=, id=5f48cce7-247c-4b5d-8287-ac14f7453254, 
ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-897890507-network, port_security_enabled=True, project_id=5bad680c763640dba71a7865b355817c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=30386, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2315, status=ACTIVE, subnets=['424417bc-1ee7-4d11-9ebf-680585d829a5'], tags=[], tenant_id=5bad680c763640dba71a7865b355817c, updated_at=2025-12-02T10:09:49Z, vlan_transparent=None, network_id=5f48cce7-247c-4b5d-8287-ac14f7453254, port_security_enabled=False, project_id=5bad680c763640dba71a7865b355817c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2357, status=DOWN, tags=[], tenant_id=5bad680c763640dba71a7865b355817c, updated_at=2025-12-02T10:09:56Z on network 5f48cce7-247c-4b5d-8287-ac14f7453254#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.827 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.828 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.828 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:09:59 localhost nova_compute[281854]: 2025-12-02 10:09:59.868 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:10:00 localhost systemd[1]: tmp-crun.zNIkIb.mount: Deactivated successfully. 
Dec 2 05:10:00 localhost dnsmasq[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/addn_hosts - 1 addresses Dec 2 05:10:00 localhost dnsmasq-dhcp[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/host Dec 2 05:10:00 localhost podman[325009]: 2025-12-02 10:10:00.032477296 +0000 UTC m=+0.069080871 container kill ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:10:00 localhost dnsmasq-dhcp[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/opts Dec 2 05:10:00 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:00.213 263406 INFO neutron.agent.dhcp.agent [None req-50696e38-84b6-4013-a7df-6126c21f36f8 - - - - - -] DHCP configuration for ports {'80164a20-3f5d-4eea-94e1-e26ceeea882c'} is completed#033[00m Dec 2 05:10:00 localhost podman[325072]: Dec 2 05:10:00 localhost nova_compute[281854]: 2025-12-02 10:10:00.322 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:10:00 localhost nova_compute[281854]: 2025-12-02 10:10:00.328 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:10:00 localhost podman[325072]: 2025-12-02 10:10:00.32905001 +0000 UTC m=+0.075174073 container create dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:10:00 localhost nova_compute[281854]: 2025-12-02 10:10:00.344 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:10:00 localhost systemd[1]: Started libpod-conmon-dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b.scope. 
Dec 2 05:10:00 localhost nova_compute[281854]: 2025-12-02 10:10:00.370 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:10:00 localhost nova_compute[281854]: 2025-12-02 10:10:00.371 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:10:00 localhost systemd[1]: Started libcrun container. Dec 2 05:10:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4a62014718597189450d4d03bb09a5957565165ae07f83f474537569bc374c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:00 localhost podman[325072]: 2025-12-02 10:10:00.382987758 +0000 UTC m=+0.129111791 container init dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:10:00 localhost podman[325072]: 2025-12-02 10:10:00.388814853 +0000 UTC m=+0.134938886 container start dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:10:00 localhost podman[325072]: 2025-12-02 10:10:00.29901824 +0000 UTC m=+0.045142293 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:00 localhost dnsmasq[325091]: started, version 2.85 cachesize 150 Dec 2 05:10:00 localhost dnsmasq[325091]: DNS service limited to local subnets Dec 2 05:10:00 localhost dnsmasq[325091]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:00 localhost dnsmasq[325091]: warning: no upstream servers configured Dec 2 05:10:00 localhost dnsmasq-dhcp[325091]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:10:00 localhost dnsmasq[325091]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:10:00 localhost dnsmasq-dhcp[325091]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:10:00 localhost dnsmasq-dhcp[325091]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:10:00 localhost ovn_controller[154505]: 2025-12-02T10:10:00Z|00430|binding|INFO|Releasing lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b from this chassis (sb_readonly=0) Dec 2 05:10:00 localhost kernel: device tap6cf9ab6e-02 left promiscuous mode Dec 2 05:10:00 localhost ovn_controller[154505]: 2025-12-02T10:10:00Z|00431|binding|INFO|Setting lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b down in Southbound Dec 2 05:10:00 localhost nova_compute[281854]: 2025-12-02 10:10:00.524 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:00.531 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6cf9ab6e-0266-437d-b907-6e6749aa6e0b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:00.532 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 6cf9ab6e-0266-437d-b907-6e6749aa6e0b in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:10:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:00.534 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No 
valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:00 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:00.535 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[21ba40b4-a4bc-43c9-88f4-15ea42832350]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:00 localhost nova_compute[281854]: 2025-12-02 10:10:00.544 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:00 localhost nova_compute[281854]: 2025-12-02 10:10:00.545 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:00 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:00.551 263406 INFO neutron.agent.dhcp.agent [None req-a4f6292e-605d-4f4d-bb3c-256e0eaab931 - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955'} is completed#033[00m Dec 2 05:10:00 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:00.707 2 INFO neutron.agent.securitygroups_rpc [None req-3e03e95a-f561-4345-b792-65b4ec75916c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:00 localhost ceph-mon[298296]: overall HEALTH_OK Dec 2 05:10:01 localhost nova_compute[281854]: 2025-12-02 10:10:01.028 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:01 localhost nova_compute[281854]: 2025-12-02 10:10:01.078 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:01 localhost 
ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:01 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:01 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:01 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:01 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:10:02 localhost dnsmasq[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/addn_hosts - 0 addresses Dec 2 05:10:02 localhost dnsmasq-dhcp[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/host Dec 2 05:10:02 localhost dnsmasq-dhcp[324604]: read /var/lib/neutron/dhcp/5198bb66-dd27-48f3-9334-ab53b7335bc8/opts Dec 2 05:10:02 localhost podman[325111]: 2025-12-02 10:10:02.270806755 +0000 UTC m=+0.040578772 container kill c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:10:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:10:02 localhost systemd[1]: tmp-crun.10D4YQ.mount: Deactivated successfully. Dec 2 05:10:02 localhost podman[325125]: 2025-12-02 10:10:02.346738939 +0000 UTC m=+0.059168078 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 
05:10:02 localhost podman[325125]: 2025-12-02 10:10:02.359041307 +0000 UTC m=+0.071470406 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:02 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:10:02 localhost nova_compute[281854]: 2025-12-02 10:10:02.438 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:02 localhost kernel: device tapb822caec-0f left promiscuous mode Dec 2 05:10:02 localhost ovn_controller[154505]: 2025-12-02T10:10:02Z|00432|binding|INFO|Releasing lport b822caec-0ff2-45ee-8f19-f63f7afb253f from this chassis (sb_readonly=0) Dec 2 05:10:02 localhost ovn_controller[154505]: 2025-12-02T10:10:02Z|00433|binding|INFO|Setting lport b822caec-0ff2-45ee-8f19-f63f7afb253f down in Southbound Dec 2 05:10:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:02.445 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-5198bb66-dd27-48f3-9334-ab53b7335bc8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5198bb66-dd27-48f3-9334-ab53b7335bc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e4dc3a6-b635-4672-b1fb-6fb25996b32e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b822caec-0ff2-45ee-8f19-f63f7afb253f) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:02.447 160221 INFO neutron.agent.ovn.metadata.agent [-] Port b822caec-0ff2-45ee-8f19-f63f7afb253f in datapath 5198bb66-dd27-48f3-9334-ab53b7335bc8 unbound from our chassis#033[00m Dec 2 05:10:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:02.448 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5198bb66-dd27-48f3-9334-ab53b7335bc8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:02.449 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[04701ea4-523b-4921-9730-f970fa223c05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:02 localhost nova_compute[281854]: 2025-12-02 10:10:02.460 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:02 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:02.847 263406 INFO neutron.agent.linux.ip_lib [None req-4c811050-eaf6-4394-8106-82618af8c12d - - - - - -] Device tapc38cde41-ba cannot be used as it has no MAC address#033[00m Dec 2 05:10:02 localhost nova_compute[281854]: 2025-12-02 10:10:02.921 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:02 localhost kernel: device tapc38cde41-ba entered promiscuous mode Dec 2 05:10:02 localhost NetworkManager[5965]: [1764670202.9314] manager: (tapc38cde41-ba): new Generic device (/org/freedesktop/NetworkManager/Devices/70) Dec 2 05:10:02 localhost nova_compute[281854]: 2025-12-02 10:10:02.930 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:02 localhost ovn_controller[154505]: 2025-12-02T10:10:02Z|00434|binding|INFO|Claiming lport c38cde41-bad1-4f80-97fd-24542d25f8b3 for this chassis. Dec 2 05:10:02 localhost ovn_controller[154505]: 2025-12-02T10:10:02Z|00435|binding|INFO|c38cde41-bad1-4f80-97fd-24542d25f8b3: Claiming unknown Dec 2 05:10:02 localhost systemd-udevd[325162]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:10:02 localhost nova_compute[281854]: 2025-12-02 10:10:02.946 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:02.945 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d11f96a2f644a22a82a6af9a2a1e5d2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d7f4c8-6f79-49e1-9d65-a5856ac6b73d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=c38cde41-bad1-4f80-97fd-24542d25f8b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:02.948 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c38cde41-bad1-4f80-97fd-24542d25f8b3 in datapath 41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd bound to our chassis#033[00m Dec 2 05:10:02 localhost ovn_controller[154505]: 2025-12-02T10:10:02Z|00436|binding|INFO|Setting lport c38cde41-bad1-4f80-97fd-24542d25f8b3 ovn-installed in OVS Dec 2 05:10:02 localhost ovn_controller[154505]: 2025-12-02T10:10:02Z|00437|binding|INFO|Setting lport c38cde41-bad1-4f80-97fd-24542d25f8b3 up in Southbound Dec 2 05:10:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:02.951 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:02 localhost nova_compute[281854]: 2025-12-02 10:10:02.952 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:02.952 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[2a257ef1-5566-46ca-a63d-e17977fa0829]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:02 localhost journal[230136]: ethtool ioctl error on tapc38cde41-ba: No such device Dec 2 05:10:02 localhost journal[230136]: ethtool ioctl error on tapc38cde41-ba: No such device Dec 2 05:10:02 localhost journal[230136]: ethtool ioctl error on tapc38cde41-ba: No such device Dec 2 05:10:02 localhost nova_compute[281854]: 2025-12-02 10:10:02.968 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] 
on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:02 localhost journal[230136]: ethtool ioctl error on tapc38cde41-ba: No such device Dec 2 05:10:02 localhost journal[230136]: ethtool ioctl error on tapc38cde41-ba: No such device Dec 2 05:10:02 localhost journal[230136]: ethtool ioctl error on tapc38cde41-ba: No such device Dec 2 05:10:02 localhost journal[230136]: ethtool ioctl error on tapc38cde41-ba: No such device Dec 2 05:10:02 localhost journal[230136]: ethtool ioctl error on tapc38cde41-ba: No such device Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.005 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.035 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:03.053 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:03.053 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:03.054 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:10:03 
localhost dnsmasq[325091]: exiting on receipt of SIGTERM Dec 2 05:10:03 localhost podman[325209]: 2025-12-02 10:10:03.175773472 +0000 UTC m=+0.064403067 container kill dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:10:03 localhost systemd[1]: libpod-dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b.scope: Deactivated successfully. Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.205 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:03 localhost podman[325228]: 2025-12-02 10:10:03.244933985 +0000 UTC m=+0.046924962 container died dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:10:03 localhost systemd[1]: var-lib-containers-storage-overlay-4b4a62014718597189450d4d03bb09a5957565165ae07f83f474537569bc374c-merged.mount: 
Deactivated successfully. Dec 2 05:10:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b-userdata-shm.mount: Deactivated successfully. Dec 2 05:10:03 localhost podman[325228]: 2025-12-02 10:10:03.29391674 +0000 UTC m=+0.095907687 container remove dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:10:03 localhost systemd[1]: libpod-conmon-dae6d72e9c18774b5e51d9e39587d0d8c31c301c25915e3b0f4617851eb2357b.scope: Deactivated successfully. Dec 2 05:10:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:03.385 263406 INFO neutron.agent.linux.ip_lib [None req-9ae0b500-4ffd-4e9d-8c95-6328544983eb - - - - - -] Device tap6cf9ab6e-02 cannot be used as it has no MAC address#033[00m Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.417 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost kernel: device tap6cf9ab6e-02 entered promiscuous mode Dec 2 05:10:03 localhost NetworkManager[5965]: [1764670203.4224] manager: (tap6cf9ab6e-02): new Generic device (/org/freedesktop/NetworkManager/Devices/71) Dec 2 05:10:03 localhost ovn_controller[154505]: 2025-12-02T10:10:03Z|00438|binding|INFO|Claiming lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b for this chassis. 
Dec 2 05:10:03 localhost ovn_controller[154505]: 2025-12-02T10:10:03Z|00439|binding|INFO|6cf9ab6e-0266-437d-b907-6e6749aa6e0b: Claiming unknown Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.422 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost systemd-udevd[325165]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:10:03 localhost ovn_controller[154505]: 2025-12-02T10:10:03Z|00440|binding|INFO|Setting lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b ovn-installed in OVS Dec 2 05:10:03 localhost ovn_controller[154505]: 2025-12-02T10:10:03Z|00441|binding|INFO|Setting lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b up in Southbound Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.429 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.432 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:03.432 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feed:7a24/64 2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6cf9ab6e-0266-437d-b907-6e6749aa6e0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:03.435 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 6cf9ab6e-0266-437d-b907-6e6749aa6e0b in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 bound to our chassis#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:03.439 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 1ac3f767-f224-4e0f-9781-32978a5bc943 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:03.440 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:03.441 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[94adb680-4c97-4d91-8410-2cbb60f5ca8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.469 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.517 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.546 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:03 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:03.550 2 INFO neutron.agent.securitygroups_rpc [None req-7ff6a690-1608-4241-962d-cf0eb5f2eb30 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:10:03 localhost nova_compute[281854]: 2025-12-02 10:10:03.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:04 localhost podman[325335]: Dec 2 05:10:04 localhost dnsmasq[324604]: exiting on receipt of SIGTERM Dec 2 05:10:04 localhost podman[325343]: 2025-12-02 10:10:04.027267993 +0000 UTC m=+0.077875646 container kill c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:04 
localhost systemd[1]: libpod-c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5.scope: Deactivated successfully. Dec 2 05:10:04 localhost openstack_network_exporter[242845]: ERROR 10:10:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:10:04 localhost openstack_network_exporter[242845]: ERROR 10:10:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:10:04 localhost openstack_network_exporter[242845]: Dec 2 05:10:04 localhost openstack_network_exporter[242845]: ERROR 10:10:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:10:04 localhost openstack_network_exporter[242845]: Dec 2 05:10:04 localhost podman[325335]: 2025-12-02 10:10:04.060404556 +0000 UTC m=+0.136144839 container create b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:04 localhost podman[325335]: 2025-12-02 10:10:03.972081573 +0000 UTC m=+0.047821926 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:04 localhost openstack_network_exporter[242845]: ERROR 10:10:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:10:04 localhost openstack_network_exporter[242845]: ERROR 10:10:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:10:04 localhost systemd[1]: Started 
libpod-conmon-b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7.scope. Dec 2 05:10:04 localhost systemd[1]: Started libcrun container. Dec 2 05:10:04 localhost podman[325370]: 2025-12-02 10:10:04.1363794 +0000 UTC m=+0.085129759 container died c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ead580261bee1ee4aad75a4d864a42a4a9d764055292bd9533e5e426eea53df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:04 localhost podman[325335]: 2025-12-02 10:10:04.144934628 +0000 UTC m=+0.220674901 container init b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 2 05:10:04 localhost podman[325335]: 2025-12-02 10:10:04.15734909 +0000 UTC m=+0.233089373 container start b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:04 localhost dnsmasq[325398]: started, version 2.85 cachesize 150 Dec 2 05:10:04 localhost dnsmasq[325398]: DNS service limited to local subnets Dec 2 05:10:04 localhost dnsmasq[325398]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:04 localhost dnsmasq[325398]: warning: no upstream servers configured Dec 2 05:10:04 localhost dnsmasq-dhcp[325398]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:10:04 localhost dnsmasq[325398]: read /var/lib/neutron/dhcp/41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd/addn_hosts - 0 addresses Dec 2 05:10:04 localhost dnsmasq-dhcp[325398]: read /var/lib/neutron/dhcp/41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd/host Dec 2 05:10:04 localhost dnsmasq-dhcp[325398]: read /var/lib/neutron/dhcp/41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd/opts Dec 2 05:10:04 localhost podman[325370]: 2025-12-02 10:10:04.179147541 +0000 UTC m=+0.127897870 container remove c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5198bb66-dd27-48f3-9334-ab53b7335bc8, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:04 localhost systemd[1]: 
libpod-conmon-c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5.scope: Deactivated successfully. Dec 2 05:10:04 localhost systemd[1]: var-lib-containers-storage-overlay-c1c0a1d963187de3e3c282bd1aa2fa59ad7e13adf0d6aa09786d03fa9f921cd1-merged.mount: Deactivated successfully. Dec 2 05:10:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3b4269865201f3fcf2d79f914e08f06684ae4410385bd9261992e76dad564f5-userdata-shm.mount: Deactivated successfully. Dec 2 05:10:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:04.366 263406 INFO neutron.agent.dhcp.agent [None req-3fc1f4d3-a5f2-44fd-badd-d1878b0bc4b8 - - - - - -] DHCP configuration for ports {'c5921849-40fe-4620-9d7d-bf845e6fc8f7'} is completed#033[00m Dec 2 05:10:04 localhost systemd[1]: run-netns-qdhcp\x2d5198bb66\x2ddd27\x2d48f3\x2d9334\x2dab53b7335bc8.mount: Deactivated successfully. Dec 2 05:10:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:04.404 263406 INFO neutron.agent.dhcp.agent [None req-b8fb038f-ce8a-4e7b-9b19-e65c62cf9f81 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:04.405 263406 INFO neutron.agent.dhcp.agent [None req-b8fb038f-ce8a-4e7b-9b19-e65c62cf9f81 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:04 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:04.407 2 INFO neutron.agent.securitygroups_rpc [None req-90a76f3a-a979-402e-97fd-700e856a8199 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']#033[00m Dec 2 05:10:04 localhost podman[325421]: Dec 2 05:10:04 localhost podman[325421]: 2025-12-02 10:10:04.425727922 +0000 UTC m=+0.099820252 container create 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:04 localhost systemd[1]: Started libpod-conmon-23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050.scope. Dec 2 05:10:04 localhost podman[325421]: 2025-12-02 10:10:04.378318838 +0000 UTC m=+0.052411238 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:04 localhost systemd[1]: Started libcrun container. Dec 2 05:10:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8d065ef74c831797f48f3e133ab46edcfa70b7eeb8c3d98871e70265859073/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:04 localhost podman[325421]: 2025-12-02 10:10:04.497306259 +0000 UTC m=+0.171398589 container init 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:10:04 localhost podman[325421]: 2025-12-02 10:10:04.503640548 +0000 UTC m=+0.177732888 container start 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:10:04 localhost dnsmasq[325439]: started, version 2.85 cachesize 150 Dec 2 05:10:04 localhost dnsmasq[325439]: DNS service limited to local subnets Dec 2 05:10:04 localhost dnsmasq[325439]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:04 localhost dnsmasq[325439]: warning: no upstream servers configured Dec 2 05:10:04 localhost dnsmasq-dhcp[325439]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 2 05:10:04 localhost dnsmasq-dhcp[325439]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:10:04 localhost dnsmasq[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:10:04 localhost dnsmasq-dhcp[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:10:04 localhost dnsmasq-dhcp[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:10:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:04.549 263406 INFO neutron.agent.dhcp.agent [None req-9ae0b500-4ffd-4e9d-8c95-6328544983eb - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:03Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, 
extra_dhcp_opts=[], fixed_ips=[, ], id=e29fa419-e9ff-497c-948a-66ee2d0016ec, ip_allocation=immediate, mac_address=fa:16:3e:0f:cf:b4, name=tempest-NetworksTestDHCPv6-219671849, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:29Z, description=, dns_domain=, id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-53840882, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13733, qos_policy_id=None, revision_number=61, router:external=False, shared=False, standard_attr_id=1669, status=ACTIVE, subnets=['d338963b-de10-4ec8-85df-80dcebdfe976', 'f0b3b782-77ce-4779-87aa-9e999cbd999a'], tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:09:58Z, vlan_transparent=None, network_id=7d517d9d-ba68-4c0f-b344-6c3be9d614a4, port_security_enabled=True, project_id=39113116e26e4da3a6194d2f44d952a8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['062c5d07-6a15-41a5-85bf-27aede3f5276'], standard_attr_id=2382, status=DOWN, tags=[], tenant_id=39113116e26e4da3a6194d2f44d952a8, updated_at=2025-12-02T10:10:03Z on network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4#033[00m Dec 2 05:10:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:04.645 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:04 localhost podman[325458]: 2025-12-02 10:10:04.750923578 +0000 UTC m=+0.059155347 container kill 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_managed=true, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:10:04 localhost dnsmasq[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 2 addresses Dec 2 05:10:04 localhost dnsmasq-dhcp[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:10:04 localhost dnsmasq-dhcp[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:10:04 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:04.928 2 INFO neutron.agent.securitygroups_rpc [None req-c77326cb-1074-4872-8ee0-28f281df7dfe 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:04.969 263406 INFO neutron.agent.dhcp.agent [None req-cd4d733a-b346-46d0-b65e-141f452d51fe - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '6cf9ab6e-0266-437d-b907-6e6749aa6e0b'} is completed#033[00m Dec 2 05:10:05 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:05.084 263406 INFO neutron.agent.dhcp.agent [None req-b2c039fa-be99-4594-bbf1-d5d7ab02bb29 - - - - - -] DHCP configuration for ports {'e29fa419-e9ff-497c-948a-66ee2d0016ec'} is completed#033[00m Dec 2 05:10:05 localhost dnsmasq[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:10:05 localhost dnsmasq-dhcp[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:10:05 localhost dnsmasq-dhcp[325439]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:10:05 localhost podman[325495]: 2025-12-02 10:10:05.119761026 +0000 UTC m=+0.059357272 
container kill 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:10:05 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:05 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:05 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:05 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data 
namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:05 localhost ovn_controller[154505]: 2025-12-02T10:10:05Z|00442|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:10:05 localhost nova_compute[281854]: 2025-12-02 10:10:05.438 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:05 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:05.448 2 INFO neutron.agent.securitygroups_rpc [None req-9cd60b66-d893-4669-ab17-1eefaaf90d0c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:05 localhost dnsmasq[325439]: exiting on receipt of SIGTERM Dec 2 05:10:05 localhost podman[325534]: 2025-12-02 10:10:05.635302485 +0000 UTC m=+0.056183258 container kill 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:05 localhost systemd[1]: libpod-23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050.scope: Deactivated successfully. 
Dec 2 05:10:05 localhost podman[325548]: 2025-12-02 10:10:05.689572422 +0000 UTC m=+0.040895091 container died 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:05 localhost podman[325548]: 2025-12-02 10:10:05.771387262 +0000 UTC m=+0.122709911 container cleanup 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:10:05 localhost systemd[1]: libpod-conmon-23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050.scope: Deactivated successfully. 
Dec 2 05:10:05 localhost podman[325550]: 2025-12-02 10:10:05.790544822 +0000 UTC m=+0.131246678 container remove 23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:10:05 localhost nova_compute[281854]: 2025-12-02 10:10:05.822 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:05 localhost nova_compute[281854]: 2025-12-02 10:10:05.852 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:05 localhost nova_compute[281854]: 2025-12-02 10:10:05.853 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:05 localhost nova_compute[281854]: 2025-12-02 10:10:05.854 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:06 localhost 
nova_compute[281854]: 2025-12-02 10:10:06.031 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost podman[240799]: time="2025-12-02T10:10:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:10:06 localhost podman[240799]: @ - - [02/Dec/2025:10:10:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157931 "" "Go-http-client/1.1" Dec 2 05:10:06 localhost nova_compute[281854]: 2025-12-02 10:10:06.081 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:06 localhost podman[240799]: @ - - [02/Dec/2025:10:10:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19722 "" "Go-http-client/1.1" Dec 2 05:10:06 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:06.090 2 INFO neutron.agent.securitygroups_rpc [None req-9685cf0d-187e-491c-a3b3-f6b6113116e3 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:06 localhost systemd[1]: tmp-crun.DXkfrn.mount: Deactivated successfully. Dec 2 05:10:06 localhost systemd[1]: var-lib-containers-storage-overlay-9f8d065ef74c831797f48f3e133ab46edcfa70b7eeb8c3d98871e70265859073-merged.mount: Deactivated successfully. Dec 2 05:10:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23bf4b2b62bb0b900376b9d626ea3d411d4402f6fd716a4f4717d78a8a152050-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:10:06 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:06.719 2 INFO neutron.agent.securitygroups_rpc [None req-3503b005-9d9f-48d0-a8b4-a51ac4fa455f 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:06 localhost podman[325626]: 2025-12-02 10:10:06.634045441 +0000 UTC m=+0.046540942 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:07 localhost podman[325626]: Dec 2 05:10:07 localhost podman[325626]: 2025-12-02 10:10:07.022913994 +0000 UTC m=+0.435409475 container create 35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 05:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:10:07 localhost systemd[1]: Started libpod-conmon-35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462.scope. Dec 2 05:10:07 localhost systemd[1]: Started libcrun container. 
Dec 2 05:10:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/641192bd43133d9a6db497be22fcb6a0a7e6bed3bb5a2e42ffba83b803902053/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:07 localhost podman[325626]: 2025-12-02 10:10:07.084998868 +0000 UTC m=+0.497494339 container init 35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:10:07 localhost podman[325626]: 2025-12-02 10:10:07.094777889 +0000 UTC m=+0.507273370 container start 35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:07 localhost dnsmasq[325657]: started, version 2.85 cachesize 150 Dec 2 05:10:07 localhost dnsmasq[325657]: DNS service limited to local subnets Dec 2 05:10:07 localhost dnsmasq[325657]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:07 localhost dnsmasq[325657]: warning: no upstream servers configured Dec 
2 05:10:07 localhost dnsmasq-dhcp[325657]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 2 05:10:07 localhost dnsmasq[325657]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/addn_hosts - 0 addresses Dec 2 05:10:07 localhost dnsmasq-dhcp[325657]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/host Dec 2 05:10:07 localhost dnsmasq-dhcp[325657]: read /var/lib/neutron/dhcp/7d517d9d-ba68-4c0f-b344-6c3be9d614a4/opts Dec 2 05:10:07 localhost podman[325641]: 2025-12-02 10:10:07.153045481 +0000 UTC m=+0.093958324 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:10:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:10:07 localhost podman[325641]: 2025-12-02 10:10:07.189320748 +0000 UTC m=+0.130233481 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:10:07 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:10:07 localhost podman[325668]: 2025-12-02 10:10:07.277868448 +0000 UTC m=+0.082504510 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:10:07 localhost podman[325668]: 2025-12-02 10:10:07.350277078 +0000 UTC m=+0.154913130 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:10:07 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:10:07 localhost dnsmasq[325398]: exiting on receipt of SIGTERM Dec 2 05:10:07 localhost podman[325708]: 2025-12-02 10:10:07.44867061 +0000 UTC m=+0.032173379 container kill b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:07 localhost systemd[1]: libpod-b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7.scope: Deactivated successfully. 
Dec 2 05:10:07 localhost podman[325722]: 2025-12-02 10:10:07.506745898 +0000 UTC m=+0.045455103 container died b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 2 05:10:07 localhost podman[325722]: 2025-12-02 10:10:07.540539488 +0000 UTC m=+0.079248653 container cleanup b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:10:07 localhost systemd[1]: libpod-conmon-b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7.scope: Deactivated successfully. 
Dec 2 05:10:07 localhost podman[325723]: 2025-12-02 10:10:07.580154714 +0000 UTC m=+0.115662064 container remove b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:10:07 localhost ovn_controller[154505]: 2025-12-02T10:10:07Z|00443|binding|INFO|Releasing lport c38cde41-bad1-4f80-97fd-24542d25f8b3 from this chassis (sb_readonly=0) Dec 2 05:10:07 localhost ovn_controller[154505]: 2025-12-02T10:10:07Z|00444|binding|INFO|Setting lport c38cde41-bad1-4f80-97fd-24542d25f8b3 down in Southbound Dec 2 05:10:07 localhost nova_compute[281854]: 2025-12-02 10:10:07.590 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:07 localhost kernel: device tapc38cde41-ba left promiscuous mode Dec 2 05:10:07 localhost nova_compute[281854]: 2025-12-02 10:10:07.616 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:07.675 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d11f96a2f644a22a82a6af9a2a1e5d2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5d7f4c8-6f79-49e1-9d65-a5856ac6b73d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c38cde41-bad1-4f80-97fd-24542d25f8b3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:07.677 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c38cde41-bad1-4f80-97fd-24542d25f8b3 in datapath 41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd unbound from our chassis#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:07.680 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41cab9a4-eb0b-40d9-b339-ce18b4e6b4dd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:07.682 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6e07aad2-48c5-400f-8df8-69f0fba28507]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:07 localhost dnsmasq[325657]: exiting on receipt of SIGTERM Dec 2 05:10:07 localhost podman[325766]: 2025-12-02 10:10:07.708029601 +0000 UTC m=+0.043704015 container kill 
35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:07 localhost systemd[1]: libpod-35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462.scope: Deactivated successfully. Dec 2 05:10:07 localhost podman[325779]: 2025-12-02 10:10:07.749817765 +0000 UTC m=+0.031317616 container died 35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:10:07 localhost podman[325779]: 2025-12-02 10:10:07.774935314 +0000 UTC m=+0.056435165 container cleanup 35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_managed=true) Dec 2 05:10:07 localhost systemd[1]: libpod-conmon-35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462.scope: Deactivated successfully. Dec 2 05:10:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:07.786 263406 INFO neutron.agent.dhcp.agent [None req-854a9495-18ab-423d-9718-9bc823f2244e - - - - - -] DHCP configuration for ports {'a59d5a92-7a77-419d-a87f-fbb46ea78955', '6cf9ab6e-0266-437d-b907-6e6749aa6e0b'} is completed#033[00m Dec 2 05:10:07 localhost podman[325781]: 2025-12-02 10:10:07.862577481 +0000 UTC m=+0.134202528 container remove 35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7d517d9d-ba68-4c0f-b344-6c3be9d614a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:10:07 localhost ovn_controller[154505]: 2025-12-02T10:10:07Z|00445|binding|INFO|Releasing lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b from this chassis (sb_readonly=0) Dec 2 05:10:07 localhost kernel: device tap6cf9ab6e-02 left promiscuous mode Dec 2 05:10:07 localhost ovn_controller[154505]: 2025-12-02T10:10:07Z|00446|binding|INFO|Setting lport 6cf9ab6e-0266-437d-b907-6e6749aa6e0b down in Southbound Dec 2 05:10:07 localhost nova_compute[281854]: 2025-12-02 10:10:07.874 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:07.894 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, 
old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feed:7a24/64 2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6cf9ab6e-0266-437d-b907-6e6749aa6e0b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:07 localhost nova_compute[281854]: 2025-12-02 10:10:07.897 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:07.900 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 6cf9ab6e-0266-437d-b907-6e6749aa6e0b in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 unbound from our chassis#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:07.905 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if 
needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:07 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:07.907 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d34813-ace5-4d72-bfa7-191c89f505e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:08 localhost systemd[1]: var-lib-containers-storage-overlay-641192bd43133d9a6db497be22fcb6a0a7e6bed3bb5a2e42ffba83b803902053-merged.mount: Deactivated successfully. Dec 2 05:10:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35466a8f33ac14ef1e8b8b36fdea58085e9237966f543dcc97b2b64670e27462-userdata-shm.mount: Deactivated successfully. Dec 2 05:10:08 localhost systemd[1]: var-lib-containers-storage-overlay-6ead580261bee1ee4aad75a4d864a42a4a9d764055292bd9533e5e426eea53df-merged.mount: Deactivated successfully. Dec 2 05:10:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b27895503a52511bce829e24c9171e68a5eb020749f8c188e7e17280d3ae12d7-userdata-shm.mount: Deactivated successfully. Dec 2 05:10:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:08.318 263406 INFO neutron.agent.dhcp.agent [None req-83a49769-21fc-4e0e-bbb5-d1d5644746ac - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:08 localhost systemd[1]: run-netns-qdhcp\x2d41cab9a4\x2deb0b\x2d40d9\x2db339\x2dce18b4e6b4dd.mount: Deactivated successfully. 
Dec 2 05:10:08 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:08 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:08 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:08 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:10:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:08.544 263406 INFO neutron.agent.dhcp.agent [None req-5d6ed0a0-fb71-407d-b47c-f30eb07d514f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:08 localhost systemd[1]: run-netns-qdhcp\x2d7d517d9d\x2dba68\x2d4c0f\x2db344\x2d6c3be9d614a4.mount: Deactivated successfully. 
Dec 2 05:10:11 localhost nova_compute[281854]: 2025-12-02 10:10:11.034 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:11 localhost nova_compute[281854]: 2025-12-02 10:10:11.083 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:11 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:11.188 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:11 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:11.363 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:11 localhost ovn_controller[154505]: 2025-12-02T10:10:11Z|00447|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:10:11 localhost nova_compute[281854]: 2025-12-02 10:10:11.671 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 
05:10:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:14 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:14.383 2 INFO neutron.agent.securitygroups_rpc [None req-aa29d14d-f8b7-4441-acc0-85287ab48c6d 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:15 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:15.079 2 INFO neutron.agent.securitygroups_rpc [None req-976c47c9-ae22-4daa-9b62-4c3bf838f3e2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:15 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:15.590 2 INFO neutron.agent.securitygroups_rpc [None req-90e76c3f-20a5-47a4-903d-78b983867e31 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:15 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": 
"auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:15 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:10:15 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:15.917 2 INFO neutron.agent.securitygroups_rpc [None req-5542fe4c-166b-4ffa-972c-029f68af7f10 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:16 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:16.016 263406 INFO neutron.agent.linux.ip_lib [None req-5ea4ad5d-c409-4169-84b1-84707fe2e0ae - - - - - -] Device tap32986807-4a cannot be used as it has no MAC address#033[00m Dec 2 05:10:16 localhost nova_compute[281854]: 2025-12-02 10:10:16.036 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:16 localhost nova_compute[281854]: 2025-12-02 10:10:16.044 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:16 localhost kernel: device tap32986807-4a entered promiscuous mode Dec 2 05:10:16 localhost NetworkManager[5965]: [1764670216.0530] manager: (tap32986807-4a): new Generic device (/org/freedesktop/NetworkManager/Devices/72) Dec 2 05:10:16 localhost ovn_controller[154505]: 2025-12-02T10:10:16Z|00448|binding|INFO|Claiming lport 32986807-4a62-4af8-ad03-9336f56fbec0 for this chassis. 
Dec 2 05:10:16 localhost ovn_controller[154505]: 2025-12-02T10:10:16Z|00449|binding|INFO|32986807-4a62-4af8-ad03-9336f56fbec0: Claiming unknown Dec 2 05:10:16 localhost nova_compute[281854]: 2025-12-02 10:10:16.056 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:16 localhost systemd-udevd[325818]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:10:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:16.064 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d25b6f5f-b086-4558-a0fb-fc54d0ecba34, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=32986807-4a62-4af8-ad03-9336f56fbec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:16.066 160221 INFO 
neutron.agent.ovn.metadata.agent [-] Port 32986807-4a62-4af8-ad03-9336f56fbec0 in datapath 39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07 bound to our chassis#033[00m Dec 2 05:10:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:16.068 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:16.069 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[90f18f69-a4f5-451e-a0e7-95ab5d079358]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:16 localhost journal[230136]: ethtool ioctl error on tap32986807-4a: No such device Dec 2 05:10:16 localhost ovn_controller[154505]: 2025-12-02T10:10:16Z|00450|binding|INFO|Setting lport 32986807-4a62-4af8-ad03-9336f56fbec0 ovn-installed in OVS Dec 2 05:10:16 localhost ovn_controller[154505]: 2025-12-02T10:10:16Z|00451|binding|INFO|Setting lport 32986807-4a62-4af8-ad03-9336f56fbec0 up in Southbound Dec 2 05:10:16 localhost nova_compute[281854]: 2025-12-02 10:10:16.084 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:16 localhost journal[230136]: ethtool ioctl error on tap32986807-4a: No such device Dec 2 05:10:16 localhost journal[230136]: ethtool ioctl error on tap32986807-4a: No such device Dec 2 05:10:16 localhost journal[230136]: ethtool ioctl error on tap32986807-4a: No such device Dec 2 05:10:16 localhost journal[230136]: ethtool ioctl error on tap32986807-4a: No such device Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.107 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.109 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 05:10:16 localhost journal[230136]: ethtool ioctl error on tap32986807-4a: No such device Dec 2 05:10:16 localhost nova_compute[281854]: 2025-12-02 10:10:16.117 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:16 localhost journal[230136]: ethtool ioctl error on tap32986807-4a: No such device Dec 2 05:10:16 localhost journal[230136]: ethtool ioctl error on tap32986807-4a: No such device Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.133 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cc33439d-fe6e-4e8a-86e3-91b7272799f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:10:16.109437', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '1949f2fe-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.352466151, 'message_signature': 'c37427566a9770c4266d5c3a04620520137d80e6dfdfb3e0266cd49485214ef5'}]}, 'timestamp': '2025-12-02 10:10:16.134638', '_unique_id': 'ecdcb95a34104229bc9462104c1b3798'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.136 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.138 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:10:16.142 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '30883092-0aed-4957-9b36-1c1308c7e09e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.138479', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '194b423a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': 
'10df04daffd25d6e4fa60f09e165f14f29f3d118beba6387b069a99feda8bdf5'}]}, 'timestamp': '2025-12-02 10:10:16.143154', '_unique_id': '24c4280373934464a997088423d5a51c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR 
oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.144 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.145 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 05:10:16 localhost nova_compute[281854]: 2025-12-02 10:10:16.157 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.173 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.174 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fff54991-3146-4542-9ab0-a53826a9dde4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.146212', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '194ffdca-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': 'e80cd05c0b70a3cbadf8d806c4d8f5fa022fe085a1019854ade5b3164a78dcee'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.146212', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19500d4c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': '2a3a102079be4a20ad0e6467e2cd70155d5af12269c4d1e4dae5ac12809bfecd'}]}, 'timestamp': '2025-12-02 10:10:16.174463', '_unique_id': 'a12ef04f78e14a20979100bacbb3704e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.176 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.176 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.177 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.177 12 DEBUG ceilometer.compute.pollsters [-] 
b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d7ec6b0-2a5c-443a-9c6a-c49eeed50c07', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.177074', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19508100-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': 'dcec819b26e36d08be7be256b777e606e2b9fc2c79f9d494235f4abd1da696be'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 
'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.177074', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19509028-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': '8b75786dabbd1ccc028340d1dbdac2ceef6ce6e814087a006703b3bdf9028bd2'}]}, 'timestamp': '2025-12-02 10:10:16.177805', '_unique_id': '927c3925dd2749f4b862da18523fe247'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.178 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.179 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.179 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0215f8a0-276c-4163-85a3-86573be209df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.179572', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '1950e424-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': '8fe27ed67fa14abee8e901c53bfb6b91b775b8fee7a016b6d7baee48529fd85d'}]}, 'timestamp': '2025-12-02 10:10:16.179969', '_unique_id': 'cb4a898386da452fa2396bbcd5ac8bd9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:10:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:10:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.181 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.182 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.182 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fae34dab-ea2b-42ce-8538-47761b74778c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.182071', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19514478-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': '9d7e4ab09f672e95a4abdadb93afb7f4b93ca0cb535a5ed5dd0c31732106701f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.182071', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '195154ae-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': 'dad63abdda738de45c0be74908f510335b796b5340fe24c7ee9a0068ddab1533'}]}, 'timestamp': '2025-12-02 10:10:16.182854', '_unique_id': '0e92621fccc04a87b7bce6ebafbdf1be'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.183 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.184 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.184 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4bc219c4-0aae-4d14-960e-249d4a31f7dc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.184843', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '1951b20a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': '5b29dc4b93340cd2a5d677ae27669aa2ffd366bbfb846d2a8361014f97cb7656'}]}, 'timestamp': '2025-12-02 10:10:16.185288', '_unique_id': '4a8af8df0eb3410baaafe2346c464ecf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.187 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.187 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.187 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5353a060-5a4c-41ff-92c6-b081728be49f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.187238', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19520da4-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': '6562a5d84e0bd2de423a3346d3b7e3916ddbf8b4f15f6353f8891392d5297601'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.187238', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19521aec-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': 'a527a540c363c2ebd881e938ec7d4d2be7bf53ea3be8ebf6d74c53c9dc850deb'}]}, 'timestamp': '2025-12-02 10:10:16.187915', '_unique_id': '1e8061a8f90c4560adc6b4625f3818f8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.188 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.189 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.189 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.190 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1fa5396e-2932-4316-a8fd-db04cdf9f523', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.189867', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1952755a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': 'c55bbc5fedfd297d4def519f52153d6407ab09074dd35f3ef2bffc1e102ba589'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.189867', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '195281b2-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': '01155353d5cb9a0a5857925e73e70958f3508df55d63e1ca17af558489ca9e89'}]}, 'timestamp': '2025-12-02 10:10:16.190535', '_unique_id': 'd07e39db30a54707bc027b5bfa462b7d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.191 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.192 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.201 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.202 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'b37d9ffa-98cc-4e25-a181-309a1f2ea829', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.192298', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19544f06-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.411375591, 'message_signature': '9b2d22d5b1251c195302c882236f76c5181785b5ec0960c5e9ca4c996361c837'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.192298', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19545bf4-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.411375591, 'message_signature': '168ffcd4cb49466cc1c5c7c60f77fe6b94dea8bf48899d2353d4ab8082e2d522'}]}, 'timestamp': '2025-12-02 10:10:16.202707', '_unique_id': '5eedb5e2d05042a9939ad20bd6a97ff3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.203 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.204 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.204 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '53c5030e-7ccc-45ea-a5e4-14e87a869938', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.204520', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '1954b1f8-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': 'ec6d517dba794707c86df6d3d96e3ebc80e4cf56dd10750ff022bfd900993c04'}]}, 'timestamp': '2025-12-02 10:10:16.204905', '_unique_id': 'd333e085e55f499fadbc234004ed6d54'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'df424f8f-b43b-45c3-91a6-b33d8bb13ee0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.206669', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1955048c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.411375591, 'message_signature': '8f64f121a5f8f136a4fd176bfc7b9f306f7f54edca43c4b0d83a31246cbf5222'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.206669', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19551094-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.411375591, 'message_signature': '0bdfdf1ec6f18e86075e9fbda7f8789d6a2d5a65d3f1144e8bcc0c7ae6b0a7ce'}]}, 'timestamp': '2025-12-02 10:10:16.207306', '_unique_id': '3246ea5dc23d461c92bcba075c706eef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.208 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0a79b561-8f99-488a-8d6a-dcb44ebe3667', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.209062', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '19556210-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': 'cd439f880027198127fc70e969b9b7d01305a0f62c1bd17e505c98113ee07757'}]}, 'timestamp': '2025-12-02 10:10:16.209410', '_unique_id': '5af9199f1182440090a092eb7393a67b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.211 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.211 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.211 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '82563115-f345-4e5d-bbd4-ca1fc289155c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.211180', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1955b4a4-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.411375591, 'message_signature': '62cd4db41904eb24d16af2d7f04e9cbe5c68cbb706a7729b2055f868adf9944c'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.211180', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1955c1ce-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.411375591, 'message_signature': '2b9933e03a3c07178d477e1608893d03de38c64bf5405756a2f1cb115e803d4e'}]}, 'timestamp': '2025-12-02 10:10:16.211843', '_unique_id': '907c7fe5ad8d4c19b0a56aeb89ebb6dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.213 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.213 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.214 12 DEBUG ceilometer.compute.pollsters [-] 
b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f5803ee1-3a98-433b-b77e-8dc351808aa8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:10:16.213763', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '19561980-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': '8dddafe28d3db9fbfbdb8df783c0d1b176281c59b99d1712a546810d09cc3c6f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 
'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:10:16.213763', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '19562588-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.365295593, 'message_signature': 'f6d285573e50887393a753ea10e656b13d08642676cd161f150c3117983c38e1'}]}, 'timestamp': '2025-12-02 10:10:16.214400', '_unique_id': 'aab4a98b5bfd4512b0ab8691b796b759'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.216 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.216 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e05347e7-99af-41fd-b47d-ac50f2a9eb86', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.216128', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '195675f6-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': '756642f5eb1d8a3293c40eef4444a4455a1477545c652bb1b07a2ec2417e7471'}]}, 'timestamp': '2025-12-02 10:10:16.216475', '_unique_id': '02b468c066004b8fa148fd585bc334f4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:10:16.217 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.217 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.218 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.218 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e3207bf1-1d67-4835-ae26-adfc0c91c59f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.218669', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 
'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '1956d974-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': 'b2037c26d5ed59e4cf3ac64e3593c1dc07b7087cb7853fe15a9f4bec2ebf1407'}]}, 'timestamp': '2025-12-02 10:10:16.219018', '_unique_id': '6acb1915bf534975ad99243255ba02cb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 
05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 
05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 18350000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '595557c5-81bc-4f96-b5e2-83db8d91fd6a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18350000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:10:16.220711', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '195728fc-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.352466151, 'message_signature': 'a88c92a94ec3dacdb023d2ed4d4253147511e4c47af27050cb4064b4d68d2219'}]}, 'timestamp': '2025-12-02 10:10:16.221045', '_unique_id': '4cadc3cf70fe47168a040646145d0ac6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:10:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.221 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.222 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:10:16.222 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '74e5ce43-8e94-443d-b3c7-166cec3b6c93', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.222804', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '19577ac8-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': 
'f365758e86ee1318a75ce5baeec0f085071b2ee4c8a512948bed260afbfd5229'}]}, 'timestamp': '2025-12-02 10:10:16.223149', '_unique_id': 'ed5c709234a44f759c3167f7cf26c4f7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR 
oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.223 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.224 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.224 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3bc7d9d6-d016-47a4-8c33-d5f58031cd3e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.224981', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '1957cfb4-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': '216c95df45552602df9ea3fc209e156b2ffad9c4017290baad6c6ea97fc07ead'}]}, 'timestamp': '2025-12-02 10:10:16.225324', '_unique_id': '3fa9a0b3c7484aa59269f5467c3efb70'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.225 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.226 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2661f453-9c28-4642-938f-5e830e2bc664', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:10:16.227047', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '19582072-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12378.357573517, 'message_signature': 'b52d7ca92e2c7b415a71ee804bfe3dc7a16b8c996554489109173ec874d30510'}]}, 'timestamp': '2025-12-02 10:10:16.227393', '_unique_id': '219f45f592b7440395ece93256e1d984'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:10:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:10:16.227 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:10:16 localhost podman[325889]:
Dec 2 05:10:16 localhost podman[325889]: 2025-12-02 10:10:16.997827267 +0000 UTC m=+0.115258394 container create 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:10:17 localhost podman[325889]: 2025-12-02 10:10:16.913518039 +0000 UTC m=+0.030949176 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:10:17 localhost systemd[1]: Started libpod-conmon-9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c.scope.
Dec 2 05:10:17 localhost systemd[1]: Started libcrun container.
Dec 2 05:10:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b13ae10e11528f1ef963d551ebcf95dfcf3157f75d67b00cb1cbdcb5fd574f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:10:17 localhost podman[325889]: 2025-12-02 10:10:17.068285634 +0000 UTC m=+0.185716761 container init 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:10:17 localhost dnsmasq[325907]: started, version 2.85 cachesize 150
Dec 2 05:10:17 localhost dnsmasq[325907]: DNS service limited to local subnets
Dec 2 05:10:17 localhost podman[325889]: 2025-12-02 10:10:17.078877276 +0000 UTC m=+0.196308443 container start 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 2 05:10:17 localhost dnsmasq[325907]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:10:17 localhost dnsmasq[325907]: warning: no upstream servers configured
Dec 2 05:10:17 localhost dnsmasq-dhcp[325907]: DHCP, static leases only on 10.101.0.0, lease time 1d
Dec 2 05:10:17 localhost dnsmasq[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/addn_hosts - 0 addresses
Dec 2 05:10:17 localhost dnsmasq-dhcp[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/host
Dec 2 05:10:17 localhost dnsmasq-dhcp[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/opts
Dec 2 05:10:17 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:17.242 263406 INFO neutron.agent.dhcp.agent [None req-19ca4539-dc74-4ccb-8f45-f7f7345f2b18 - - - - - -] DHCP configuration for ports {'38d8cd53-d8f6-48a6-8f6b-757990efd71c'} is completed#033[00m
Dec 2 05:10:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:18.134 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:17Z, description=, device_id=ccc84569-b123-40e7-b4b7-ca02e8eac496, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=70c458f3-a908-4c47-aabc-6babc0f38a51, ip_allocation=immediate, mac_address=fa:16:3e:37:d6:0e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:10:13Z, description=, dns_domain=, id=39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-517243776, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=41799, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2424, status=ACTIVE, subnets=['62baa615-eade-454c-b324-6bcc3b621528'], tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:10:14Z, vlan_transparent=None, network_id=39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, port_security_enabled=False, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2453, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:10:17Z on network 39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07#033[00m
Dec 2 05:10:18 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:18.245 2 INFO neutron.agent.securitygroups_rpc [None req-46c6c36b-2a47-445c-9836-d1e79e5b14a9 8d2b383649fa45f2821f6e290127374a 84fd536b8b4d489f944ed3e4bbfaeb5b - - default default] Security group rule updated ['d6dcbb7b-b610-4062-87d4-37eec03c1ecf']#033[00m
Dec 2 05:10:18 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 2 05:10:18 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:10:18 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:10:18 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:10:18 localhost dnsmasq[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/addn_hosts - 1 addresses
Dec 2 05:10:18 localhost dnsmasq-dhcp[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/host
Dec 2 05:10:18 localhost dnsmasq-dhcp[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/opts
Dec 2 05:10:18 localhost podman[325926]: 2025-12-02 10:10:18.406779834 +0000 UTC m=+0.072552914 container kill 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 2 05:10:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:18.754 263406 INFO neutron.agent.dhcp.agent [None req-c78c9559-dad7-4d8d-bd13-089df71025af - - - - - -] DHCP configuration for ports {'70c458f3-a908-4c47-aabc-6babc0f38a51'} is completed#033[00m
Dec 2 05:10:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 05:10:19 localhost systemd[1]: tmp-crun.T30d33.mount: Deactivated successfully.
Dec 2 05:10:19 localhost podman[325948]: 2025-12-02 10:10:19.459283032 +0000 UTC m=+0.094948761 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:10:19 localhost podman[325948]: 2025-12-02 10:10:19.526256507 +0000 UTC m=+0.161922196 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm) Dec 2 05:10:19 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:10:20 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:20.233 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:17Z, description=, device_id=ccc84569-b123-40e7-b4b7-ca02e8eac496, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=70c458f3-a908-4c47-aabc-6babc0f38a51, ip_allocation=immediate, mac_address=fa:16:3e:37:d6:0e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:10:13Z, description=, dns_domain=, id=39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-517243776, port_security_enabled=True, project_id=7dffef2e74844a7ebb6ee68826fb7e57, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=41799, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2424, status=ACTIVE, subnets=['62baa615-eade-454c-b324-6bcc3b621528'], tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:10:14Z, vlan_transparent=None, network_id=39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, port_security_enabled=False, project_id=7dffef2e74844a7ebb6ee68826fb7e57, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2453, status=DOWN, tags=[], tenant_id=7dffef2e74844a7ebb6ee68826fb7e57, updated_at=2025-12-02T10:10:17Z on network 39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07#033[00m Dec 2 05:10:20 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:20.292 2 INFO neutron.agent.securitygroups_rpc [None req-0a3dbc5e-28c6-4790-aae5-682551e66674 57832728fce14260b03b0f06122d5897 
aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:20 localhost dnsmasq[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/addn_hosts - 1 addresses Dec 2 05:10:20 localhost dnsmasq-dhcp[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/host Dec 2 05:10:20 localhost dnsmasq-dhcp[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/opts Dec 2 05:10:20 localhost podman[325982]: 2025-12-02 10:10:20.473193932 +0000 UTC m=+0.070758726 container kill 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:10:20 localhost systemd[1]: tmp-crun.HYftgK.mount: Deactivated successfully. 
Dec 2 05:10:20 localhost nova_compute[281854]: 2025-12-02 10:10:20.700 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:21 localhost nova_compute[281854]: 2025-12-02 10:10:21.039 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:21 localhost nova_compute[281854]: 2025-12-02 10:10:21.087 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.201 263406 INFO neutron.agent.dhcp.agent [None req-a59c5026-9011-4028-a90d-553fcad66b1b - - - - - -] DHCP configuration for ports {'70c458f3-a908-4c47-aabc-6babc0f38a51'} is completed#033[00m Dec 2 05:10:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent [None req-7847b949-fdf3-4943-985f-b94a01502df1 - - - - - -] Unable to enable dhcp for c9b0342d-5faa-4588-b4a8-0132ddc49133.: oslo_messaging.rpc.client.RemoteError: Remote error: MechanismDriverError Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: ['Traceback (most recent call last):\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n result = func(ctxt, **new_args)\n', ' File 
"/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/quota/resource_registry.py", line 95, in wrapper\n ret_val = f(_self, context, *args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 292, in create_dhcp_port\n return self._port_action(plugin, context, port, \'create_port\')\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 118, in _port_action\n return 
p_utils.create_port(plugin, context, port)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/plugins/utils.py", line 338, in create_port\n return core_plugin.create_port(\n', ' File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n return f_with_retry(*args, **kwargs,\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1583, in create_port\n return self._after_create_port(context, result, 
mech_context)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1607, in _after_create_port\n self.delete_port(context, result[\'id\'], l3_port_check=False)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1602, in _after_create_port\n bound_context = self._bind_port_if_needed(mech_context)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, 
**dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 607, in _bind_port_if_needed\n self._commit_port_binding(context, bind_context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 861, in _commit_port_binding\n self.mechanism_manager.update_port_postcommit(cur_context)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/managers.py", line 764, in update_port_postcommit\n self._call_on_drivers("update_port_postcommit", context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/managers.py", line 513, in _call_on_drivers\n raise ml2_exc.MechanismDriverError(\n', 'neutron.plugins.ml2.common.exceptions.MechanismDriverError\n']. Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 324, in enable Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent common_utils.wait_until_true(self._enable, timeout=300) Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 744, in wait_until_true Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent while not predicate(): Dec 2 05:10:21 
localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 336, in _enable Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent interface_name = self.device_manager.setup( Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1825, in setup Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent self.cleanup_stale_devices(network, dhcp_port=None) Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent self.force_reraise() Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent raise self.value Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1820, in setup Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent port = self.setup_dhcp_port(network, segment) Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1755, in setup_dhcp_port Dec 2 05:10:21 localhost 
neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent dhcp_port = setup_method(network, device_id, dhcp_subnets) Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1703, in _setup_new_dhcp_port Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent return self.plugin.create_dhcp_port({'port': port_dict}) Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 893, in create_dhcp_port Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent port = cctxt.call(self.context, 'create_dhcp_port', Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron_lib/rpc.py", line 157, in call Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent return self._original_context.call(ctxt, method, **kwargs) Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent result = self.transport._send( Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent return self._driver.send(target, ctxt, 
message, Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent return self._send(target, ctxt, message, wait_for_reply, timeout, Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent raise result Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent oslo_messaging.rpc.client.RemoteError: Remote error: MechanismDriverError Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent ['Traceback (most recent call last):\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n res = self.dispatcher.dispatch(message)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n result = func(ctxt, **new_args)\n', ' File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n return func(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/quota/resource_registry.py", line 95, in wrapper\n ret_val = f(_self, context, *args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 292, in create_dhcp_port\n return self._port_action(plugin, context, port, \'create_port\')\n', ' File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 118, in _port_action\n return p_utils.create_port(plugin, context, port)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/plugins/utils.py", line 338, in create_port\n return core_plugin.create_port(\n', ' File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n return f_with_retry(*args, **kwargs,\n', ' File 
"/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1583, in create_port\n return self._after_create_port(context, result, mech_context)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1607, in _after_create_port\n self.delete_port(context, result[\'id\'], l3_port_check=False)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File 
"/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1602, in _after_create_port\n bound_context = self._bind_port_if_needed(mech_context)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n setattr(e, \'_RETRY_EXCEEDED\', True)\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n ectxt.value = e.inner_exc\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n context_reference.session.rollback()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n self.force_reraise()\n', ' File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n raise self.value\n', ' File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n return f(*dup_args, **dup_kwargs)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 607, in _bind_port_if_needed\n self._commit_port_binding(context, bind_context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 861, in _commit_port_binding\n self.mechanism_manager.update_port_postcommit(cur_context)\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/managers.py", line 764, 
in update_port_postcommit\n self._call_on_drivers("update_port_postcommit", context,\n', ' File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/managers.py", line 513, in _call_on_drivers\n raise ml2_exc.MechanismDriverError(\n', 'neutron.plugins.ml2.common.exceptions.MechanismDriverError\n']. Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.230 263406 ERROR neutron.agent.dhcp.agent #033[00m Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.232 263406 INFO neutron.agent.dhcp.agent [None req-df0a516e-2ba9-46d1-bdd1-2505dc3dca33 - - - - - -] Synchronizing state#033[00m Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.447 263406 INFO neutron.agent.dhcp.agent [None req-3717f4b1-77ba-425c-b108-011d16bbdeb6 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 05:10:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:21.448 263406 INFO neutron.agent.dhcp.agent [-] Starting network c9b0342d-5faa-4588-b4a8-0132ddc49133 dhcp configuration#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.332 263406 INFO neutron.agent.dhcp.agent [None req-b71ad753-eb9b-4d15-afb8-18fb045a258a - - - - - -] Finished network c9b0342d-5faa-4588-b4a8-0132ddc49133 dhcp configuration#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.333 263406 INFO neutron.agent.dhcp.agent [None req-3717f4b1-77ba-425c-b108-011d16bbdeb6 - - - - - -] Synchronizing state complete#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.334 263406 INFO neutron.agent.dhcp.agent [None req-3717f4b1-77ba-425c-b108-011d16bbdeb6 - - - - - -] Synchronizing state#033[00m Dec 2 05:10:22 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:22 localhost ceph-mon[298296]: from='mgr.34354 ' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:22 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:22 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:10:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:10:22 localhost podman[326003]: 2025-12-02 10:10:22.462090545 +0000 UTC m=+0.087933484 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:10:22 localhost podman[326003]: 2025-12-02 10:10:22.472347899 +0000 UTC m=+0.098190838 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:10:22 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.678 263406 INFO neutron.agent.dhcp.agent [None req-2ad2cb68-84bf-4287-b213-7d22078cddbd - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.679 263406 INFO neutron.agent.dhcp.agent [-] Starting network c9b0342d-5faa-4588-b4a8-0132ddc49133 dhcp configuration#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.679 263406 INFO neutron.agent.dhcp.agent [-] Finished network c9b0342d-5faa-4588-b4a8-0132ddc49133 dhcp configuration#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.680 263406 INFO neutron.agent.dhcp.agent [None req-2ad2cb68-84bf-4287-b213-7d22078cddbd - - - - - -] Synchronizing state complete#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.680 263406 INFO neutron.agent.dhcp.agent [None req-7847b949-fdf3-4943-985f-b94a01502df1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.681 263406 INFO neutron.agent.dhcp.agent [None req-7847b949-fdf3-4943-985f-b94a01502df1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:22.916 263406 INFO 
neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:24 localhost nova_compute[281854]: 2025-12-02 10:10:24.038 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:10:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:10:24 localhost podman[326021]: 2025-12-02 10:10:24.455100617 +0000 UTC m=+0.094890449 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, container_name=openstack_network_exporter) Dec 2 05:10:24 localhost podman[326022]: 2025-12-02 10:10:24.544289623 +0000 UTC m=+0.177519461 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:10:24 localhost podman[326021]: 2025-12-02 10:10:24.549193884 +0000 UTC m=+0.188983726 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, 
name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=) Dec 2 05:10:24 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:10:24 localhost podman[326022]: 2025-12-02 10:10:24.603496702 +0000 UTC m=+0.236726500 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:10:24 localhost systemd[1]: 
89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:10:24 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:24.933 2 INFO neutron.agent.securitygroups_rpc [None req-3ae6da31-b902-4566-8baa-11e094d2ee12 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:25 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:10:25 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:25 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:25 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:26 localhost nova_compute[281854]: 
2025-12-02 10:10:26.042 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:26 localhost nova_compute[281854]: 2025-12-02 10:10:26.089 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:26 localhost podman[326171]: 2025-12-02 10:10:26.628438443 +0000 UTC m=+0.102064720 container exec 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, vcs-type=git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7) Dec 2 05:10:26 localhost podman[326171]: 2025-12-02 
10:10:26.813398603 +0000 UTC m=+0.287024870 container exec_died 990b8c741b7783c8fc872e091a073eaa8355db0c6a880b1d7d40e92d418ad467 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541913, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 2 05:10:27 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:27.539 263406 INFO neutron.agent.linux.ip_lib [None req-c37d69a8-486c-4e50-8e89-8e4f125d8676 - - - - - -] Device tap6305f6b8-f6 cannot be used as it has no MAC address#033[00m Dec 2 05:10:27 localhost nova_compute[281854]: 2025-12-02 10:10:27.573 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:27 localhost kernel: device tap6305f6b8-f6 entered promiscuous mode Dec 2 05:10:27 localhost nova_compute[281854]: 2025-12-02 10:10:27.585 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:27 localhost NetworkManager[5965]: [1764670227.5854] manager: (tap6305f6b8-f6): new Generic device (/org/freedesktop/NetworkManager/Devices/73) Dec 2 05:10:27 localhost ovn_controller[154505]: 2025-12-02T10:10:27Z|00452|binding|INFO|Claiming lport 6305f6b8-f6d1-42c8-8da0-74c67d8b4998 for this chassis. Dec 2 05:10:27 localhost ovn_controller[154505]: 2025-12-02T10:10:27Z|00453|binding|INFO|6305f6b8-f6d1-42c8-8da0-74c67d8b4998: Claiming unknown Dec 2 05:10:27 localhost systemd-udevd[326323]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:10:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:27.597 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-ba9b74ca-c826-47d9-9b2c-806aa0652611', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba9b74ca-c826-47d9-9b2c-806aa0652611', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae8275bd-608b-4d44-bec9-32778c15dfb9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6305f6b8-f6d1-42c8-8da0-74c67d8b4998) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:27.598 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 6305f6b8-f6d1-42c8-8da0-74c67d8b4998 in datapath ba9b74ca-c826-47d9-9b2c-806aa0652611 bound to our chassis#033[00m Dec 2 05:10:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:27.598 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ba9b74ca-c826-47d9-9b2c-806aa0652611 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:27 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:27.600 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[4335b6b4-9a2d-4f5a-bd08-72b05566de47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:27 localhost nova_compute[281854]: 2025-12-02 10:10:27.618 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:27 localhost journal[230136]: ethtool ioctl error on tap6305f6b8-f6: No such device Dec 2 05:10:27 localhost ovn_controller[154505]: 2025-12-02T10:10:27Z|00454|binding|INFO|Setting lport 6305f6b8-f6d1-42c8-8da0-74c67d8b4998 ovn-installed in OVS Dec 2 05:10:27 localhost ovn_controller[154505]: 2025-12-02T10:10:27Z|00455|binding|INFO|Setting lport 6305f6b8-f6d1-42c8-8da0-74c67d8b4998 up in Southbound Dec 2 05:10:27 localhost nova_compute[281854]: 2025-12-02 10:10:27.624 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:27 localhost journal[230136]: ethtool ioctl error on tap6305f6b8-f6: No such device Dec 2 05:10:27 localhost journal[230136]: ethtool ioctl error on tap6305f6b8-f6: No such device Dec 2 05:10:27 localhost journal[230136]: ethtool ioctl error on tap6305f6b8-f6: No such device Dec 2 05:10:27 localhost journal[230136]: ethtool ioctl error on tap6305f6b8-f6: No such device Dec 2 05:10:27 localhost journal[230136]: ethtool ioctl error on tap6305f6b8-f6: No such device Dec 2 05:10:27 localhost journal[230136]: ethtool ioctl error on tap6305f6b8-f6: No such device Dec 2 05:10:27 localhost journal[230136]: ethtool ioctl error on tap6305f6b8-f6: No such device Dec 2 05:10:27 localhost nova_compute[281854]: 2025-12-02 10:10:27.675 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:27 localhost nova_compute[281854]: 2025-12-02 10:10:27.724 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": 
"osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 2 05:10:28 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:10:28 localhost podman[326435]: Dec 2 05:10:28 localhost podman[326435]: 2025-12-02 10:10:28.703157363 +0000 UTC m=+0.125010472 container create 5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba9b74ca-c826-47d9-9b2c-806aa0652611, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:28 localhost podman[326435]: 2025-12-02 10:10:28.628320359 +0000 UTC m=+0.050173488 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:28 localhost systemd[1]: Started libpod-conmon-5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf.scope. Dec 2 05:10:28 localhost systemd[1]: Started libcrun container. Dec 2 05:10:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846af1f535196c2b1f3fa28965eaa27fb6ef8e77dfc94da176dafd06a1387634/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:28 localhost podman[326435]: 2025-12-02 10:10:28.793974303 +0000 UTC m=+0.215827422 container init 5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba9b74ca-c826-47d9-9b2c-806aa0652611, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 05:10:28 localhost podman[326435]: 2025-12-02 10:10:28.803435126 +0000 UTC m=+0.225288235 container start 5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba9b74ca-c826-47d9-9b2c-806aa0652611, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:10:28 localhost dnsmasq[326471]: started, version 2.85 cachesize 150 Dec 2 05:10:28 localhost dnsmasq[326471]: DNS service limited to local subnets Dec 2 05:10:28 localhost dnsmasq[326471]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:28 localhost dnsmasq[326471]: warning: no upstream servers configured Dec 2 05:10:28 localhost dnsmasq-dhcp[326471]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:10:28 localhost dnsmasq[326471]: read /var/lib/neutron/dhcp/ba9b74ca-c826-47d9-9b2c-806aa0652611/addn_hosts - 0 addresses Dec 2 05:10:28 localhost dnsmasq-dhcp[326471]: read /var/lib/neutron/dhcp/ba9b74ca-c826-47d9-9b2c-806aa0652611/host Dec 2 05:10:28 localhost dnsmasq-dhcp[326471]: read /var/lib/neutron/dhcp/ba9b74ca-c826-47d9-9b2c-806aa0652611/opts Dec 2 05:10:29 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:29.026 263406 INFO neutron.agent.dhcp.agent [None req-07cc9c98-c19c-48d9-aa3f-f26314ee2cd6 - - - - - -] DHCP configuration for ports {'b38cddf9-e4fe-47e0-9ffd-5e33d69bbc25'} is completed#033[00m Dec 2 05:10:29 localhost nova_compute[281854]: 2025-12-02 10:10:29.059 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:29 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M Dec 2 05:10:29 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M Dec 2 05:10:29 localhost ceph-mon[298296]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M Dec 2 05:10:29 localhost ceph-mon[298296]: Unable to set 
osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:29 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:29 localhost ceph-mon[298296]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 2 05:10:29 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:30 localhost nova_compute[281854]: 2025-12-02 10:10:30.066 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:31 localhost nova_compute[281854]: 2025-12-02 10:10:31.046 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:31 localhost nova_compute[281854]: 2025-12-02 10:10:31.128 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:32 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:10:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow 
r"], "format": "json"} : dispatch Dec 2 05:10:32 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:10:32 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:32.835 2 INFO neutron.agent.securitygroups_rpc [None req-b8b78442-432d-4c51-90c0-6b0763587b8d 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1794fecb-60a8-41cc-838d-a48dc5474875']#033[00m Dec 2 05:10:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e156 e156: 6 total, 6 up, 6 in Dec 2 05:10:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:10:33 localhost podman[326472]: 2025-12-02 10:10:33.451783988 +0000 UTC m=+0.082470708 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3) Dec 2 05:10:33 localhost podman[326472]: 2025-12-02 10:10:33.490031648 +0000 UTC m=+0.120718368 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:33 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:10:34 localhost openstack_network_exporter[242845]: ERROR 10:10:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:10:34 localhost openstack_network_exporter[242845]: ERROR 10:10:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:10:34 localhost openstack_network_exporter[242845]: ERROR 10:10:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:10:34 localhost openstack_network_exporter[242845]: ERROR 10:10:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:10:34 localhost openstack_network_exporter[242845]: Dec 2 05:10:34 localhost openstack_network_exporter[242845]: ERROR 10:10:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:10:34 localhost openstack_network_exporter[242845]: Dec 2 05:10:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:34.609 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:34.611 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:10:34 localhost nova_compute[281854]: 2025-12-02 10:10:34.610 
281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:35 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:35.371 2 INFO neutron.agent.securitygroups_rpc [None req-d462b9e0-1fd6-4bb8-aa5c-65cdc1c34ce0 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1bd96bc4-2204-473c-8b88-08bb385e4850', '1794fecb-60a8-41cc-838d-a48dc5474875']#033[00m Dec 2 05:10:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:35.997 263406 INFO neutron.agent.linux.ip_lib [None req-30ed4756-1c6a-4a9b-8207-822095a9eb63 - - - - - -] Device tap998458b3-d6 cannot be used as it has no MAC address#033[00m Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.020 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost kernel: device tap998458b3-d6 entered promiscuous mode Dec 2 05:10:36 localhost NetworkManager[5965]: [1764670236.0297] manager: (tap998458b3-d6): new Generic device (/org/freedesktop/NetworkManager/Devices/74) Dec 2 05:10:36 localhost ovn_controller[154505]: 2025-12-02T10:10:36Z|00456|binding|INFO|Claiming lport 998458b3-d6cd-4ecd-850f-289ca92b1da7 for this chassis. Dec 2 05:10:36 localhost ovn_controller[154505]: 2025-12-02T10:10:36Z|00457|binding|INFO|998458b3-d6cd-4ecd-850f-289ca92b1da7: Claiming unknown Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.031 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost systemd-udevd[326501]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:10:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:36.038 2 INFO neutron.agent.securitygroups_rpc [None req-d3e51cbf-aa8e-4108-aa2d-8a899b386ca8 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1bd96bc4-2204-473c-8b88-08bb385e4850']#033[00m Dec 2 05:10:36 localhost podman[240799]: time="2025-12-02T10:10:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:10:36 localhost journal[230136]: ethtool ioctl error on tap998458b3-d6: No such device Dec 2 05:10:36 localhost ovn_controller[154505]: 2025-12-02T10:10:36Z|00458|binding|INFO|Setting lport 998458b3-d6cd-4ecd-850f-289ca92b1da7 ovn-installed in OVS Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.066 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost journal[230136]: ethtool ioctl error on tap998458b3-d6: No such device Dec 2 05:10:36 localhost journal[230136]: ethtool ioctl error on tap998458b3-d6: No such device Dec 2 05:10:36 localhost podman[240799]: @ - - [02/Dec/2025:10:10:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159748 "" "Go-http-client/1.1" Dec 2 05:10:36 localhost journal[230136]: ethtool ioctl error on tap998458b3-d6: No such device Dec 2 05:10:36 localhost journal[230136]: ethtool ioctl error on tap998458b3-d6: No such device Dec 2 05:10:36 localhost journal[230136]: ethtool ioctl error on tap998458b3-d6: No such device Dec 2 05:10:36 localhost journal[230136]: ethtool ioctl error on tap998458b3-d6: No such device Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.104 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost 
journal[230136]: ethtool ioctl error on tap998458b3-d6: No such device Dec 2 05:10:36 localhost podman[240799]: @ - - [02/Dec/2025:10:10:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20212 "" "Go-http-client/1.1" Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.129 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.135 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:10:36 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:10:36 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:10:36 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:10:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:36.265 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 
'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-2880e5f6-e139-4f3f-a855-f230a91f9ae2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2880e5f6-e139-4f3f-a855-f230a91f9ae2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4caf983d-dc6c-4268-9c5b-d4a14993b754, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=998458b3-d6cd-4ecd-850f-289ca92b1da7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:36 localhost ovn_controller[154505]: 2025-12-02T10:10:36Z|00459|binding|INFO|Setting lport 998458b3-d6cd-4ecd-850f-289ca92b1da7 up in Southbound Dec 2 05:10:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:36.270 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 998458b3-d6cd-4ecd-850f-289ca92b1da7 in datapath 2880e5f6-e139-4f3f-a855-f230a91f9ae2 bound to our chassis#033[00m Dec 2 05:10:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:36.272 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2880e5f6-e139-4f3f-a855-f230a91f9ae2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:36.273 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ddeede58-ce2c-469c-a38e-c5bdbd28db48]: (4, 
False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:36 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:36.559 263406 INFO neutron.agent.linux.ip_lib [None req-e3d44190-c3fe-477f-b41e-ca7c586a4206 - - - - - -] Device tap79d1b462-4e cannot be used as it has no MAC address#033[00m Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.640 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost kernel: device tap79d1b462-4e entered promiscuous mode Dec 2 05:10:36 localhost NetworkManager[5965]: [1764670236.6443] manager: (tap79d1b462-4e): new Generic device (/org/freedesktop/NetworkManager/Devices/75) Dec 2 05:10:36 localhost ovn_controller[154505]: 2025-12-02T10:10:36Z|00460|binding|INFO|Claiming lport 79d1b462-4e0f-4b98-8dd4-56658187af29 for this chassis. Dec 2 05:10:36 localhost ovn_controller[154505]: 2025-12-02T10:10:36Z|00461|binding|INFO|79d1b462-4e0f-4b98-8dd4-56658187af29: Claiming unknown Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.646 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.658 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost ovn_controller[154505]: 2025-12-02T10:10:36Z|00462|binding|INFO|Setting lport 79d1b462-4e0f-4b98-8dd4-56658187af29 ovn-installed in OVS Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.660 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.678 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] 
on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.701 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost nova_compute[281854]: 2025-12-02 10:10:36.718 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:36 localhost ovn_controller[154505]: 2025-12-02T10:10:36Z|00463|binding|INFO|Setting lport 79d1b462-4e0f-4b98-8dd4-56658187af29 up in Southbound Dec 2 05:10:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:36.824 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-0563b4b4-439a-4655-9225-28a24ad09db2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0563b4b4-439a-4655-9225-28a24ad09db2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f39b5ca1adf344dd9239d3d0131792d4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0e1d319-f053-4ffe-b337-109ee69f3933, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=79d1b462-4e0f-4b98-8dd4-56658187af29) 
old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:36.826 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 79d1b462-4e0f-4b98-8dd4-56658187af29 in datapath 0563b4b4-439a-4655-9225-28a24ad09db2 bound to our chassis#033[00m Dec 2 05:10:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:36.827 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0563b4b4-439a-4655-9225-28a24ad09db2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:36.828 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[17120cb9-902c-4ef5-be67-9188b275ca3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:37 localhost podman[326598]: Dec 2 05:10:37 localhost podman[326598]: 2025-12-02 10:10:37.022560975 +0000 UTC m=+0.087975616 container create c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2880e5f6-e139-4f3f-a855-f230a91f9ae2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:37 localhost systemd[1]: Started libpod-conmon-c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa.scope. Dec 2 05:10:37 localhost systemd[1]: Started libcrun container. 
Dec 2 05:10:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e3c6b4d7a0b177b297b4a6dc780781660327bdcf3b0f523d9debbe50858c72e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:37 localhost podman[326598]: 2025-12-02 10:10:36.991018014 +0000 UTC m=+0.056432695 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:37 localhost podman[326598]: 2025-12-02 10:10:37.09253281 +0000 UTC m=+0.157947451 container init c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2880e5f6-e139-4f3f-a855-f230a91f9ae2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:10:37 localhost podman[326598]: 2025-12-02 10:10:37.099170536 +0000 UTC m=+0.164585177 container start c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2880e5f6-e139-4f3f-a855-f230a91f9ae2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:10:37 localhost dnsmasq[326623]: started, version 2.85 cachesize 150 Dec 2 05:10:37 localhost dnsmasq[326623]: DNS service limited to local subnets Dec 2 05:10:37 localhost dnsmasq[326623]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n 
IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:37 localhost dnsmasq[326623]: warning: no upstream servers configured Dec 2 05:10:37 localhost dnsmasq-dhcp[326623]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:10:37 localhost dnsmasq[326623]: read /var/lib/neutron/dhcp/2880e5f6-e139-4f3f-a855-f230a91f9ae2/addn_hosts - 0 addresses Dec 2 05:10:37 localhost dnsmasq-dhcp[326623]: read /var/lib/neutron/dhcp/2880e5f6-e139-4f3f-a855-f230a91f9ae2/host Dec 2 05:10:37 localhost dnsmasq-dhcp[326623]: read /var/lib/neutron/dhcp/2880e5f6-e139-4f3f-a855-f230a91f9ae2/opts Dec 2 05:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:10:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:37.445 263406 INFO neutron.agent.dhcp.agent [None req-8f45a370-4c5a-48bd-a761-104fd5b51ec1 - - - - - -] DHCP configuration for ports {'619e786c-c344-438b-b31b-289b481ea916'} is completed#033[00m Dec 2 05:10:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:10:37 localhost podman[326637]: 2025-12-02 10:10:37.505631878 +0000 UTC m=+0.145734054 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:10:37 localhost podman[326637]: 2025-12-02 10:10:37.518244145 +0000 UTC m=+0.158346321 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:10:37 localhost podman[326660]: Dec 2 05:10:37 localhost podman[326677]: 2025-12-02 10:10:37.591937778 +0000 UTC m=+0.081100662 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 05:10:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:37.612 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), 
if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:10:37 localhost podman[326660]: 2025-12-02 10:10:37.523595967 +0000 UTC m=+0.088418387 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:37 localhost podman[326677]: 2025-12-02 10:10:37.641115478 +0000 UTC m=+0.130278352 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:10:37 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:10:37 localhost podman[326660]: 2025-12-02 10:10:37.67080975 +0000 UTC m=+0.235632100 container create 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:10:37 localhost dnsmasq[326623]: exiting on receipt of SIGTERM Dec 2 05:10:37 localhost podman[326722]: 2025-12-02 10:10:37.702288939 +0000 UTC m=+0.055604273 container kill c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2880e5f6-e139-4f3f-a855-f230a91f9ae2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:10:37 localhost systemd[1]: Started libpod-conmon-0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223.scope. Dec 2 05:10:37 localhost systemd[1]: libpod-c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa.scope: Deactivated successfully. Dec 2 05:10:37 localhost systemd[1]: Started libcrun container. 
Dec 2 05:10:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78a8e6dcaae55003fe5ded5cdde675da092aada4d48b494b157ce714a6ab4dbb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:37 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:10:37 localhost podman[326743]: 2025-12-02 10:10:37.766094119 +0000 UTC m=+0.044339912 container died c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2880e5f6-e139-4f3f-a855-f230a91f9ae2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:10:37 localhost podman[326660]: 2025-12-02 10:10:37.775052608 +0000 UTC m=+0.339875228 container init 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:10:37 localhost podman[326660]: 2025-12-02 10:10:37.784200272 +0000 UTC m=+0.349022602 container start 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:37 localhost dnsmasq[326766]: started, version 2.85 cachesize 150 Dec 2 05:10:37 localhost dnsmasq[326766]: DNS service limited to local subnets Dec 2 05:10:37 localhost dnsmasq[326766]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:37 localhost dnsmasq[326766]: warning: no upstream servers configured Dec 2 05:10:37 localhost dnsmasq-dhcp[326766]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:10:37 localhost dnsmasq[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/addn_hosts - 0 addresses Dec 2 05:10:37 localhost dnsmasq-dhcp[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/host Dec 2 05:10:37 localhost dnsmasq-dhcp[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/opts Dec 2 05:10:37 localhost podman[326743]: 2025-12-02 10:10:37.854057604 +0000 UTC m=+0.132303377 container remove c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2880e5f6-e139-4f3f-a855-f230a91f9ae2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:37 localhost systemd[1]: 
libpod-conmon-c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa.scope: Deactivated successfully. Dec 2 05:10:37 localhost kernel: device tap998458b3-d6 left promiscuous mode Dec 2 05:10:37 localhost ovn_controller[154505]: 2025-12-02T10:10:37Z|00464|binding|INFO|Releasing lport 998458b3-d6cd-4ecd-850f-289ca92b1da7 from this chassis (sb_readonly=0) Dec 2 05:10:37 localhost ovn_controller[154505]: 2025-12-02T10:10:37Z|00465|binding|INFO|Setting lport 998458b3-d6cd-4ecd-850f-289ca92b1da7 down in Southbound Dec 2 05:10:37 localhost nova_compute[281854]: 2025-12-02 10:10:37.894 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:37.915 263406 INFO neutron.agent.dhcp.agent [None req-42995013-df4c-4a2c-8059-f99844bc4895 - - - - - -] DHCP configuration for ports {'1257c5f2-830b-4520-89c5-ef86a571e196'} is completed#033[00m Dec 2 05:10:37 localhost nova_compute[281854]: 2025-12-02 10:10:37.921 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:37 localhost nova_compute[281854]: 2025-12-02 10:10:37.923 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:38 localhost systemd[1]: var-lib-containers-storage-overlay-0e3c6b4d7a0b177b297b4a6dc780781660327bdcf3b0f523d9debbe50858c72e-merged.mount: Deactivated successfully. Dec 2 05:10:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c28087d52b7261a89e3f0fa0146db9a3070cf621a5308f9abf66bc716b3078fa-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:10:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:38.070 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-2880e5f6-e139-4f3f-a855-f230a91f9ae2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2880e5f6-e139-4f3f-a855-f230a91f9ae2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4caf983d-dc6c-4268-9c5b-d4a14993b754, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=998458b3-d6cd-4ecd-850f-289ca92b1da7) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:38.072 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 998458b3-d6cd-4ecd-850f-289ca92b1da7 in datapath 2880e5f6-e139-4f3f-a855-f230a91f9ae2 unbound from our chassis#033[00m Dec 2 05:10:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:38.074 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2880e5f6-e139-4f3f-a855-f230a91f9ae2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:38.074 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[1318bcc3-7b23-4545-b43c-c917dad167ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:38 localhost systemd[1]: run-netns-qdhcp\x2d2880e5f6\x2de139\x2d4f3f\x2da855\x2df230a91f9ae2.mount: Deactivated successfully. Dec 2 05:10:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:38.091 263406 INFO neutron.agent.dhcp.agent [None req-bfd7d0b5-41be-47d8-9379-fcbf85a32212 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:38.133 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:38 localhost ovn_controller[154505]: 2025-12-02T10:10:38Z|00466|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:10:39 localhost nova_compute[281854]: 2025-12-02 10:10:39.058 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:39 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:39.563 2 INFO neutron.agent.securitygroups_rpc [None req-b887e192-fbe8-4997-9fd8-8fe0e62f2ad3 ffc28dac62f4495c9452fce17050d09a 16ae7f5f159c4b10a1539c2d9b52fce5 - - default default] Security group rule updated ['2409236f-431b-4039-840f-bb40e7858355']#033[00m Dec 2 05:10:39 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:39 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": 
["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:39 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:39 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:41 localhost nova_compute[281854]: 2025-12-02 10:10:41.075 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:41 localhost nova_compute[281854]: 2025-12-02 10:10:41.130 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e157 e157: 6 total, 6 up, 6 in Dec 2 05:10:42 localhost nova_compute[281854]: 2025-12-02 10:10:42.194 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.555160) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242555241, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2428, "num_deletes": 253, "total_data_size": 4173874, "memory_usage": 4352720, "flush_reason": "Manual Compaction"} Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Dec 2 05:10:42 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e158 e158: 6 total, 6 up, 6 in Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242571360, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 2731233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25980, "largest_seqno": 28403, "table_properties": {"data_size": 2721883, "index_size": 5663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22659, "raw_average_key_size": 21, "raw_value_size": 2701971, "raw_average_value_size": 2610, "num_data_blocks": 241, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, 
"filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670111, "oldest_key_time": 1764670111, "file_creation_time": 1764670242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 16247 microseconds, and 7234 cpu microseconds. Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.571415) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 2731233 bytes OK Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.571441) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.573691) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.573711) EVENT_LOG_v1 {"time_micros": 1764670242573705, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.573736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 4162457, prev total WAL file size 4162498, number of live WAL files 2. Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.574779) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. 
'7061786F73003132353531' seq:0, type:0; will stop at (end) Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(2667KB)], [45(14MB)] Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242574823, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 18099848, "oldest_snapshot_seqno": -1} Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 12950 keys, 16965663 bytes, temperature: kUnknown Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242674088, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 16965663, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16893176, "index_size": 39042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 347789, "raw_average_key_size": 26, "raw_value_size": 16673922, "raw_average_value_size": 1287, "num_data_blocks": 1470, "num_entries": 12950, "num_filter_entries": 12950, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.674393) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 16965663 bytes Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.676025) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.2 rd, 170.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 14.7 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(12.8) write-amplify(6.2) OK, records in: 13490, records dropped: 540 output_compression: NoCompression Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.676055) EVENT_LOG_v1 {"time_micros": 1764670242676042, "job": 26, "event": "compaction_finished", "compaction_time_micros": 99354, "compaction_time_cpu_micros": 47518, "output_level": 6, "num_output_files": 1, "total_output_size": 16965663, "num_input_records": 13490, "num_output_records": 12950, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541913/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242676512, "job": 26, "event": "table_file_deletion", "file_number": 47} Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242678999, "job": 26, "event": "table_file_deletion", "file_number": 45} Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.574680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.679096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.679102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.679105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.679108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:42 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:10:42.679111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:10:43 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 
05:10:43 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:43 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:43 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:10:44 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:44.894 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:42Z, description=, device_id=d7b55be3-df8b-4a7a-a053-0a870505d24f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=965bd720-26a0-4a5b-8f5d-737e33ccfa28, ip_allocation=immediate, mac_address=fa:16:3e:5f:f6:d1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:10:32Z, description=, dns_domain=, id=0563b4b4-439a-4655-9225-28a24ad09db2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-2025865746-network, port_security_enabled=True, project_id=f39b5ca1adf344dd9239d3d0131792d4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22033, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2548, status=ACTIVE, subnets=['17eb0bb5-c5da-450e-9a0e-764bbc851d5d'], tags=[], tenant_id=f39b5ca1adf344dd9239d3d0131792d4, updated_at=2025-12-02T10:10:34Z, vlan_transparent=None, network_id=0563b4b4-439a-4655-9225-28a24ad09db2, port_security_enabled=False, 
project_id=f39b5ca1adf344dd9239d3d0131792d4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2583, status=DOWN, tags=[], tenant_id=f39b5ca1adf344dd9239d3d0131792d4, updated_at=2025-12-02T10:10:42Z on network 0563b4b4-439a-4655-9225-28a24ad09db2#033[00m Dec 2 05:10:45 localhost podman[326784]: 2025-12-02 10:10:45.733030506 +0000 UTC m=+0.060976956 container kill 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 2 05:10:45 localhost dnsmasq[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/addn_hosts - 1 addresses Dec 2 05:10:45 localhost dnsmasq-dhcp[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/host Dec 2 05:10:45 localhost dnsmasq-dhcp[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/opts Dec 2 05:10:46 localhost nova_compute[281854]: 2025-12-02 10:10:46.074 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:46 localhost nova_compute[281854]: 2025-12-02 10:10:46.132 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:46 localhost 
nova_compute[281854]: 2025-12-02 10:10:46.310 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:46.347 263406 INFO neutron.agent.dhcp.agent [None req-93478d3f-813b-4385-8c0e-1773ffb1ec40 - - - - - -] DHCP configuration for ports {'965bd720-26a0-4a5b-8f5d-737e33ccfa28'} is completed#033[00m Dec 2 05:10:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e159 e159: 6 total, 6 up, 6 in Dec 2 05:10:46 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:46 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:46 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:46 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data 
namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:47 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e160 e160: 6 total, 6 up, 6 in Dec 2 05:10:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:47.829 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:42Z, description=, device_id=d7b55be3-df8b-4a7a-a053-0a870505d24f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=965bd720-26a0-4a5b-8f5d-737e33ccfa28, ip_allocation=immediate, mac_address=fa:16:3e:5f:f6:d1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:10:32Z, description=, dns_domain=, id=0563b4b4-439a-4655-9225-28a24ad09db2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-2025865746-network, port_security_enabled=True, project_id=f39b5ca1adf344dd9239d3d0131792d4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22033, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2548, status=ACTIVE, subnets=['17eb0bb5-c5da-450e-9a0e-764bbc851d5d'], tags=[], tenant_id=f39b5ca1adf344dd9239d3d0131792d4, updated_at=2025-12-02T10:10:34Z, vlan_transparent=None, network_id=0563b4b4-439a-4655-9225-28a24ad09db2, port_security_enabled=False, project_id=f39b5ca1adf344dd9239d3d0131792d4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2583, status=DOWN, tags=[], tenant_id=f39b5ca1adf344dd9239d3d0131792d4, updated_at=2025-12-02T10:10:42Z on network 0563b4b4-439a-4655-9225-28a24ad09db2#033[00m 
Dec 2 05:10:48 localhost podman[326820]: 2025-12-02 10:10:48.373741897 +0000 UTC m=+0.058966892 container kill 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:10:48 localhost dnsmasq[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/addn_hosts - 1 addresses Dec 2 05:10:48 localhost dnsmasq-dhcp[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/host Dec 2 05:10:48 localhost dnsmasq-dhcp[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/opts Dec 2 05:10:48 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e161 e161: 6 total, 6 up, 6 in Dec 2 05:10:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:48.764 263406 INFO neutron.agent.dhcp.agent [None req-ce3a0d39-3ba8-47e1-a6b6-7758e74847cd - - - - - -] DHCP configuration for ports {'965bd720-26a0-4a5b-8f5d-737e33ccfa28'} is completed#033[00m Dec 2 05:10:48 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:48.917 2 INFO neutron.agent.securitygroups_rpc [None req-c5737a18-0087-499e-be42-7eb006bdc7a0 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['d54da663-bbdd-4967-b64b-8a9f95f589dd']#033[00m Dec 2 05:10:48 localhost snmpd[69635]: empty variable list in _query Dec 2 05:10:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:10:50 localhost systemd[1]: tmp-crun.VFUnNN.mount: Deactivated successfully. Dec 2 05:10:50 localhost podman[326841]: 2025-12-02 10:10:50.455877694 +0000 UTC m=+0.095182257 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute) Dec 2 05:10:50 localhost podman[326841]: 2025-12-02 
10:10:50.468314835 +0000 UTC m=+0.107619368 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:10:50 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:10:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e162 e162: 6 total, 6 up, 6 in Dec 2 05:10:50 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:10:50 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:50 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:10:50 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:10:51 localhost nova_compute[281854]: 2025-12-02 10:10:51.078 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:51 localhost nova_compute[281854]: 2025-12-02 10:10:51.134 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e163 e163: 6 total, 6 up, 6 in Dec 2 05:10:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:10:53 localhost systemd[1]: tmp-crun.cckbJE.mount: Deactivated successfully. 
Dec 2 05:10:53 localhost podman[326860]: 2025-12-02 10:10:53.450434887 +0000 UTC m=+0.093348299 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 05:10:53 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:53.457 2 INFO 
neutron.agent.securitygroups_rpc [None req-f5ab8c31-1c32-4d96-896a-5e2487d3e658 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['d54da663-bbdd-4967-b64b-8a9f95f589dd', '475d5c6b-fba4-44ef-b012-03f922f307d8', 'c4cadb1e-8d38-4a3c-b1f8-f6d93fbe5968']#033[00m Dec 2 05:10:53 localhost podman[326860]: 2025-12-02 10:10:53.458989905 +0000 UTC m=+0.101903327 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:10:53 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:10:53 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:53 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:53 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:10:53 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:10:53 localhost nova_compute[281854]: 2025-12-02 10:10:53.830 281858 DEBUG oslo_service.periodic_task [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:53 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:53.845 2 INFO neutron.agent.securitygroups_rpc [None req-054f4ed0-bf56-489e-9f2e-7d08aad333fe 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['475d5c6b-fba4-44ef-b012-03f922f307d8', 'c4cadb1e-8d38-4a3c-b1f8-f6d93fbe5968']#033[00m Dec 2 05:10:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:10:54 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1771883914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:10:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:10:54 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1771883914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:10:54 localhost podman[326893]: 2025-12-02 10:10:54.602665022 +0000 UTC m=+0.064108739 container kill ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:10:54 localhost dnsmasq[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/addn_hosts - 0 addresses Dec 2 05:10:54 localhost dnsmasq-dhcp[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/host Dec 2 05:10:54 localhost dnsmasq-dhcp[324773]: read /var/lib/neutron/dhcp/5f48cce7-247c-4b5d-8287-ac14f7453254/opts Dec 2 05:10:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:10:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:10:54 localhost podman[326906]: 2025-12-02 10:10:54.727113689 +0000 UTC m=+0.092021733 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41) Dec 2 05:10:54 localhost podman[326906]: 2025-12-02 10:10:54.743152926 +0000 UTC m=+0.108061030 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 2 05:10:54 localhost podman[326907]: 2025-12-02 10:10:54.7916898 +0000 UTC m=+0.149017843 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:10:54 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:10:54 localhost ovn_controller[154505]: 2025-12-02T10:10:54Z|00467|binding|INFO|Releasing lport b35a7019-cd13-49ba-ae0b-aa70d3ce3b27 from this chassis (sb_readonly=0) Dec 2 05:10:54 localhost nova_compute[281854]: 2025-12-02 10:10:54.816 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:54 localhost ovn_controller[154505]: 2025-12-02T10:10:54Z|00468|binding|INFO|Setting lport b35a7019-cd13-49ba-ae0b-aa70d3ce3b27 down in Southbound Dec 2 05:10:54 localhost kernel: device tapb35a7019-cd left promiscuous mode Dec 2 05:10:54 localhost podman[326907]: 2025-12-02 10:10:54.823031195 +0000 UTC m=+0.180359198 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:10:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:54.826 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-5f48cce7-247c-4b5d-8287-ac14f7453254', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f48cce7-247c-4b5d-8287-ac14f7453254', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bad680c763640dba71a7865b355817c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=beadeea7-0616-4ea7-b4f9-7f4239a4c055, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b35a7019-cd13-49ba-ae0b-aa70d3ce3b27) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:54.828 160221 INFO neutron.agent.ovn.metadata.agent [-] Port b35a7019-cd13-49ba-ae0b-aa70d3ce3b27 in datapath 5f48cce7-247c-4b5d-8287-ac14f7453254 unbound from our chassis#033[00m Dec 2 05:10:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:54.832 160221 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f48cce7-247c-4b5d-8287-ac14f7453254, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:10:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:54.834 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5556c035-5726-4174-a7c5-a53f76a7d8d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:54 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:10:54 localhost nova_compute[281854]: 2025-12-02 10:10:54.841 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:54 localhost nova_compute[281854]: 2025-12-02 10:10:54.842 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:55.400 263406 INFO neutron.agent.linux.ip_lib [None req-b4c5dd66-fb5c-4d1d-819f-f64986343de9 - - - - - -] Device tapf3b02d29-75 cannot be used as it has no MAC address#033[00m Dec 2 05:10:55 localhost nova_compute[281854]: 2025-12-02 10:10:55.461 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:55 localhost kernel: device tapf3b02d29-75 entered promiscuous mode Dec 2 05:10:55 localhost NetworkManager[5965]: [1764670255.4715] manager: (tapf3b02d29-75): new Generic device (/org/freedesktop/NetworkManager/Devices/76) Dec 2 05:10:55 localhost ovn_controller[154505]: 2025-12-02T10:10:55Z|00469|binding|INFO|Claiming lport f3b02d29-7542-44d0-a991-a01ec607868c for this chassis. 
Dec 2 05:10:55 localhost ovn_controller[154505]: 2025-12-02T10:10:55Z|00470|binding|INFO|f3b02d29-7542-44d0-a991-a01ec607868c: Claiming unknown Dec 2 05:10:55 localhost nova_compute[281854]: 2025-12-02 10:10:55.473 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:55 localhost systemd-udevd[326968]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:10:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:55.481 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-d46489f9-a47b-465f-b68c-fdf4256b1786', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d46489f9-a47b-465f-b68c-fdf4256b1786', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c09a5d01-8e4d-42c4-b32a-41401f5c5328, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f3b02d29-7542-44d0-a991-a01ec607868c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:10:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:55.483 160221 INFO 
neutron.agent.ovn.metadata.agent [-] Port f3b02d29-7542-44d0-a991-a01ec607868c in datapath d46489f9-a47b-465f-b68c-fdf4256b1786 bound to our chassis#033[00m Dec 2 05:10:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:55.484 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d46489f9-a47b-465f-b68c-fdf4256b1786 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:10:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:10:55.484 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a5f33d-b37a-4949-a3f8-6970bda5fdb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:10:55 localhost journal[230136]: ethtool ioctl error on tapf3b02d29-75: No such device Dec 2 05:10:55 localhost ovn_controller[154505]: 2025-12-02T10:10:55Z|00471|binding|INFO|Setting lport f3b02d29-7542-44d0-a991-a01ec607868c ovn-installed in OVS Dec 2 05:10:55 localhost ovn_controller[154505]: 2025-12-02T10:10:55Z|00472|binding|INFO|Setting lport f3b02d29-7542-44d0-a991-a01ec607868c up in Southbound Dec 2 05:10:55 localhost nova_compute[281854]: 2025-12-02 10:10:55.515 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:55 localhost journal[230136]: ethtool ioctl error on tapf3b02d29-75: No such device Dec 2 05:10:55 localhost journal[230136]: ethtool ioctl error on tapf3b02d29-75: No such device Dec 2 05:10:55 localhost journal[230136]: ethtool ioctl error on tapf3b02d29-75: No such device Dec 2 05:10:55 localhost journal[230136]: ethtool ioctl error on tapf3b02d29-75: No such device Dec 2 05:10:55 localhost journal[230136]: ethtool ioctl error on tapf3b02d29-75: No such device Dec 2 05:10:55 localhost journal[230136]: ethtool ioctl error on tapf3b02d29-75: No such 
device Dec 2 05:10:55 localhost journal[230136]: ethtool ioctl error on tapf3b02d29-75: No such device Dec 2 05:10:55 localhost nova_compute[281854]: 2025-12-02 10:10:55.553 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:55 localhost nova_compute[281854]: 2025-12-02 10:10:55.580 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:56 localhost nova_compute[281854]: 2025-12-02 10:10:56.081 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:56 localhost nova_compute[281854]: 2025-12-02 10:10:56.136 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:10:56 localhost podman[327039]: Dec 2 05:10:56 localhost podman[327039]: 2025-12-02 10:10:56.433442331 +0000 UTC m=+0.090232496 container create 804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:56 localhost systemd[1]: Started libpod-conmon-804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01.scope. 
Dec 2 05:10:56 localhost podman[327039]: 2025-12-02 10:10:56.387832345 +0000 UTC m=+0.044622510 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:56 localhost systemd[1]: Started libcrun container. Dec 2 05:10:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e164 e164: 6 total, 6 up, 6 in Dec 2 05:10:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ccbfaaa8ec4c595ccd6601936f664b8c09718d9046a5213589c5746d0d8e14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:56 localhost podman[327039]: 2025-12-02 10:10:56.532795678 +0000 UTC m=+0.189585873 container init 804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:10:56 localhost podman[327039]: 2025-12-02 10:10:56.548881147 +0000 UTC m=+0.205671322 container start 804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:10:56 localhost dnsmasq[327058]: started, version 2.85 cachesize 150 Dec 2 
05:10:56 localhost dnsmasq[327058]: DNS service limited to local subnets Dec 2 05:10:56 localhost dnsmasq[327058]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:56 localhost dnsmasq[327058]: warning: no upstream servers configured Dec 2 05:10:56 localhost dnsmasq-dhcp[327058]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:10:56 localhost dnsmasq[327058]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/addn_hosts - 0 addresses Dec 2 05:10:56 localhost dnsmasq-dhcp[327058]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/host Dec 2 05:10:56 localhost dnsmasq-dhcp[327058]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/opts Dec 2 05:10:56 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:56.725 263406 INFO neutron.agent.dhcp.agent [None req-f35455d8-d91f-4960-bdad-e090c82bb305 - - - - - -] DHCP configuration for ports {'7df0cc5b-0d4a-48fd-ae58-4a75ac1c28cb'} is completed#033[00m Dec 2 05:10:56 localhost dnsmasq[327058]: exiting on receipt of SIGTERM Dec 2 05:10:56 localhost podman[327076]: 2025-12-02 10:10:56.932371837 +0000 UTC m=+0.064211223 container kill 804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:10:56 localhost systemd[1]: libpod-804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01.scope: Deactivated successfully. 
Dec 2 05:10:57 localhost podman[327090]: 2025-12-02 10:10:57.011810863 +0000 UTC m=+0.059665820 container died 804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01-userdata-shm.mount: Deactivated successfully. Dec 2 05:10:57 localhost systemd[1]: var-lib-containers-storage-overlay-79ccbfaaa8ec4c595ccd6601936f664b8c09718d9046a5213589c5746d0d8e14-merged.mount: Deactivated successfully. Dec 2 05:10:57 localhost podman[327090]: 2025-12-02 10:10:57.066018489 +0000 UTC m=+0.113873416 container remove 804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:10:57 localhost systemd[1]: libpod-conmon-804b260e4a7ae9950c52ed2ac004e40a0edd83e1847320032f14759403604a01.scope: Deactivated successfully. 
Dec 2 05:10:57 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:10:57 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:57 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:10:57 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:10:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:10:57 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1722968383' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:10:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:10:57 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1722968383' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:10:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e165 e165: 6 total, 6 up, 6 in Dec 2 05:10:57 localhost ovn_controller[154505]: 2025-12-02T10:10:57Z|00473|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:10:57 localhost nova_compute[281854]: 2025-12-02 10:10:57.948 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:58 localhost ovn_controller[154505]: 2025-12-02T10:10:58Z|00474|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:10:58 localhost nova_compute[281854]: 2025-12-02 10:10:58.538 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:10:58 localhost podman[327167]: Dec 2 05:10:58 localhost podman[327167]: 2025-12-02 10:10:58.788814249 +0000 UTC m=+0.078663048 container create 429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:10:58 localhost nova_compute[281854]: 2025-12-02 10:10:58.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:58 localhost nova_compute[281854]: 2025-12-02 10:10:58.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:10:58 localhost systemd[1]: Started libpod-conmon-429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b.scope. Dec 2 05:10:58 localhost systemd[1]: tmp-crun.8bmdDT.mount: Deactivated successfully. Dec 2 05:10:58 localhost podman[327167]: 2025-12-02 10:10:58.746828401 +0000 UTC m=+0.036677240 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:10:58 localhost systemd[1]: Started libcrun container. Dec 2 05:10:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b03b76d06f065d064f053ee52b9bc33a267b704dd60e482f157855766fd952e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:10:58 localhost podman[327167]: 2025-12-02 10:10:58.869764366 +0000 UTC m=+0.159613165 container init 429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:58 localhost podman[327167]: 2025-12-02 10:10:58.894074464 +0000 UTC m=+0.183923273 container start 429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:10:58 localhost dnsmasq[327200]: started, version 2.85 cachesize 150 Dec 2 05:10:58 localhost dnsmasq[327200]: DNS service limited to local subnets Dec 2 05:10:58 localhost dnsmasq[327200]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:10:58 localhost dnsmasq[327200]: warning: no upstream servers configured Dec 2 05:10:58 localhost dnsmasq-dhcp[327200]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:10:58 localhost dnsmasq-dhcp[327200]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:10:58 localhost dnsmasq[327200]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/addn_hosts - 0 addresses Dec 2 05:10:58 localhost dnsmasq-dhcp[327200]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/host Dec 2 05:10:58 localhost dnsmasq-dhcp[327200]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/opts Dec 2 05:10:58 localhost neutron_sriov_agent[256494]: 2025-12-02 10:10:58.928 2 INFO neutron.agent.securitygroups_rpc [None req-84a55dcc-5035-483d-9948-fd4c09f198da 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']#033[00m Dec 2 05:10:59 localhost dnsmasq[324773]: exiting on receipt of SIGTERM Dec 2 05:10:59 localhost podman[327202]: 2025-12-02 10:10:59.018363696 +0000 UTC m=+0.049906550 container 
kill ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:10:59 localhost systemd[1]: libpod-ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829.scope: Deactivated successfully. Dec 2 05:10:59 localhost podman[327216]: 2025-12-02 10:10:59.094939477 +0000 UTC m=+0.059845386 container died ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:10:59 localhost podman[327216]: 2025-12-02 10:10:59.127145035 +0000 UTC m=+0.092050894 container cleanup ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS 
Stream 9 Base Image) Dec 2 05:10:59 localhost systemd[1]: libpod-conmon-ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829.scope: Deactivated successfully. Dec 2 05:10:59 localhost podman[327217]: 2025-12-02 10:10:59.180933448 +0000 UTC m=+0.141741758 container remove ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5f48cce7-247c-4b5d-8287-ac14f7453254, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:10:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:59.195 263406 INFO neutron.agent.dhcp.agent [None req-5b768297-a8ca-412b-8751-d245a5ad872e - - - - - -] DHCP configuration for ports {'7df0cc5b-0d4a-48fd-ae58-4a75ac1c28cb', 'f3b02d29-7542-44d0-a991-a01ec607868c'} is completed#033[00m Dec 2 05:10:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:59.224 263406 INFO neutron.agent.dhcp.agent [None req-9f4ce02d-8b26-44d8-86a0-6a469b69a03e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:10:59.225 263406 INFO neutron.agent.dhcp.agent [None req-9f4ce02d-8b26-44d8-86a0-6a469b69a03e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:10:59 localhost systemd[1]: tmp-crun.Zz9J8Y.mount: Deactivated successfully. Dec 2 05:10:59 localhost systemd[1]: var-lib-containers-storage-overlay-615057096d3c152d4d1f671a5bd383cdd6ec5d4ac33268b4ace59e5fc7761d1b-merged.mount: Deactivated successfully. 
Dec 2 05:10:59 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef222e4c070887caa7adf3962619919fb8d0edaea9899146270f130a7f3ca829-userdata-shm.mount: Deactivated successfully. Dec 2 05:10:59 localhost systemd[1]: run-netns-qdhcp\x2d5f48cce7\x2d247c\x2d4b5d\x2d8287\x2dac14f7453254.mount: Deactivated successfully. Dec 2 05:10:59 localhost nova_compute[281854]: 2025-12-02 10:10:59.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:10:59 localhost nova_compute[281854]: 2025-12-02 10:10:59.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:10:59 localhost nova_compute[281854]: 2025-12-02 10:10:59.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:11:00 localhost nova_compute[281854]: 2025-12-02 10:11:00.010 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:11:00 localhost nova_compute[281854]: 2025-12-02 10:11:00.011 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:11:00 localhost nova_compute[281854]: 2025-12-02 10:11:00.011 281858 DEBUG 
nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:11:00 localhost nova_compute[281854]: 2025-12-02 10:11:00.011 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:11:00 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:00 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:00 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:00 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data 
namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.084 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.137 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.773 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": 
{}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.812 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.812 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.813 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.835 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.835 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.836 281858 
DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.836 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:11:01 localhost nova_compute[281854]: 2025-12-02 10:11:01.836 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:11:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:11:02 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3426892413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.304 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.373 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.374 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:11:02 localhost ovn_controller[154505]: 2025-12-02T10:11:02Z|00475|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.602 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.625 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.627 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11199MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.627 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.628 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.716 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.716 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.716 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:11:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:11:02 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/383165306' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:11:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:11:02 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/383165306' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:11:02 localhost nova_compute[281854]: 2025-12-02 10:11:02.790 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:03.054 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:03.054 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:03.055 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:11:03 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:11:03 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2275295108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:11:03 localhost nova_compute[281854]: 2025-12-02 10:11:03.221 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:11:03 localhost nova_compute[281854]: 2025-12-02 10:11:03.228 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:11:03 localhost nova_compute[281854]: 2025-12-02 10:11:03.251 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:11:03 localhost nova_compute[281854]: 2025-12-02 10:11:03.254 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:11:03 localhost nova_compute[281854]: 2025-12-02 10:11:03.254 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:11:03 localhost dnsmasq[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/addn_hosts - 0 addresses Dec 2 05:11:03 localhost dnsmasq-dhcp[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/host Dec 2 05:11:03 localhost dnsmasq-dhcp[325907]: read /var/lib/neutron/dhcp/39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07/opts Dec 2 05:11:03 localhost podman[327306]: 2025-12-02 10:11:03.306341047 +0000 UTC m=+0.054776990 container kill 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:03 localhost nova_compute[281854]: 2025-12-02 10:11:03.491 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:03 localhost ovn_controller[154505]: 2025-12-02T10:11:03Z|00476|binding|INFO|Releasing lport 32986807-4a62-4af8-ad03-9336f56fbec0 from this chassis (sb_readonly=0) Dec 2 05:11:03 localhost ovn_controller[154505]: 2025-12-02T10:11:03Z|00477|binding|INFO|Setting lport 32986807-4a62-4af8-ad03-9336f56fbec0 down in Southbound Dec 2 05:11:03 localhost kernel: device tap32986807-4a left promiscuous mode Dec 2 05:11:03 localhost nova_compute[281854]: 2025-12-02 
10:11:03.511 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:03.634 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7dffef2e74844a7ebb6ee68826fb7e57', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d25b6f5f-b086-4558-a0fb-fc54d0ecba34, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=32986807-4a62-4af8-ad03-9336f56fbec0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:03.636 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 32986807-4a62-4af8-ad03-9336f56fbec0 in datapath 39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07 unbound from our chassis#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:03.639 160221 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:11:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:03.641 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e720ea-1510-4332-860f-3a5d13ecffa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:04 localhost openstack_network_exporter[242845]: ERROR 10:11:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:11:04 localhost openstack_network_exporter[242845]: ERROR 10:11:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:11:04 localhost openstack_network_exporter[242845]: ERROR 10:11:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:11:04 localhost openstack_network_exporter[242845]: ERROR 10:11:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:11:04 localhost openstack_network_exporter[242845]: Dec 2 05:11:04 localhost openstack_network_exporter[242845]: ERROR 10:11:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:11:04 localhost openstack_network_exporter[242845]: Dec 2 05:11:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:11:04 localhost podman[327332]: 2025-12-02 10:11:04.448087634 +0000 UTC m=+0.077617290 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:04 localhost podman[327332]: 2025-12-02 10:11:04.461214073 +0000 UTC m=+0.090743759 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:04 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:11:04 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Dec 2 05:11:04 localhost dnsmasq[325907]: exiting on receipt of SIGTERM Dec 2 05:11:04 localhost podman[327361]: 2025-12-02 10:11:04.58002794 +0000 UTC m=+0.065025084 container kill 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:11:04 localhost systemd[1]: libpod-9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c.scope: Deactivated successfully. Dec 2 05:11:04 localhost podman[327374]: 2025-12-02 10:11:04.662331963 +0000 UTC m=+0.063187724 container died 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:04 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:04 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:04 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:04 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:11:04 localhost podman[327374]: 2025-12-02 10:11:04.703008008 +0000 UTC m=+0.103863699 container cleanup 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:04 localhost systemd[1]: libpod-conmon-9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c.scope: Deactivated successfully. 
Dec 2 05:11:04 localhost podman[327375]: 2025-12-02 10:11:04.729989656 +0000 UTC m=+0.127443677 container remove 9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39b95b79-8fd6-45a1-b6ac-c6ee2cb0dc07, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:04.756 263406 INFO neutron.agent.dhcp.agent [None req-3890ab30-d1db-4803-bd8c-1221ebad6a83 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:04.988 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:05 localhost ovn_controller[154505]: 2025-12-02T10:11:05Z|00478|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:11:05 localhost nova_compute[281854]: 2025-12-02 10:11:05.204 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:05 localhost nova_compute[281854]: 2025-12-02 10:11:05.269 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:11:05 localhost systemd[1]: var-lib-containers-storage-overlay-47b13ae10e11528f1ef963d551ebcf95dfcf3157f75d67b00cb1cbdcb5fd574f-merged.mount: Deactivated successfully. 
Dec 2 05:11:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e7cbac2b42ae8161add606bed32cbd0f4b194a5a786c9dccdd0e6a6796ef82c-userdata-shm.mount: Deactivated successfully. Dec 2 05:11:05 localhost systemd[1]: run-netns-qdhcp\x2d39b95b79\x2d8fd6\x2d45a1\x2db6ac\x2dc6ee2cb0dc07.mount: Deactivated successfully. Dec 2 05:11:05 localhost nova_compute[281854]: 2025-12-02 10:11:05.822 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:11:05 localhost nova_compute[281854]: 2025-12-02 10:11:05.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:11:06 localhost podman[240799]: time="2025-12-02T10:11:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:11:06 localhost nova_compute[281854]: 2025-12-02 10:11:06.090 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:06 localhost podman[240799]: @ - - [02/Dec/2025:10:11:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159845 "" "Go-http-client/1.1" Dec 2 05:11:06 localhost nova_compute[281854]: 2025-12-02 10:11:06.138 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:06 localhost podman[240799]: @ - - [02/Dec/2025:10:11:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20214 "" "Go-http-client/1.1" Dec 2 05:11:06 
localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e166 e166: 6 total, 6 up, 6 in Dec 2 05:11:06 localhost nova_compute[281854]: 2025-12-02 10:11:06.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:11:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e167 e167: 6 total, 6 up, 6 in Dec 2 05:11:07 localhost nova_compute[281854]: 2025-12-02 10:11:07.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:11:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:11:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:11:08 localhost podman[327403]: 2025-12-02 10:11:08.473253711 +0000 UTC m=+0.083892587 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:11:08 localhost podman[327403]: 2025-12-02 10:11:08.481324775 +0000 UTC m=+0.091963641 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:11:08 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:11:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:11:08 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4217679944' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:11:08 localhost podman[327404]: 2025-12-02 10:11:08.539349421 +0000 UTC m=+0.148210360 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 05:11:08 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:08 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:08 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:08 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:08 localhost podman[327404]: 2025-12-02 10:11:08.588576374 +0000 UTC m=+0.197437263 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:08 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:11:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e168 e168: 6 total, 6 up, 6 in Dec 2 05:11:10 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:10.483 263406 INFO neutron.agent.linux.ip_lib [None req-cd73f1c7-b3e6-4c3a-83ee-0455e30c70d9 - - - - - -] Device tapa910a553-85 cannot be used as it has no MAC address#033[00m Dec 2 05:11:10 localhost nova_compute[281854]: 2025-12-02 10:11:10.552 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:10 localhost kernel: device tapa910a553-85 entered promiscuous mode Dec 2 05:11:10 localhost NetworkManager[5965]: [1764670270.5605] manager: (tapa910a553-85): new Generic device (/org/freedesktop/NetworkManager/Devices/77) Dec 2 05:11:10 localhost nova_compute[281854]: 2025-12-02 10:11:10.562 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:10 localhost ovn_controller[154505]: 2025-12-02T10:11:10Z|00479|binding|INFO|Claiming lport a910a553-85b1-4284-b87b-d67a0455f7a3 for this chassis. Dec 2 05:11:10 localhost ovn_controller[154505]: 2025-12-02T10:11:10Z|00480|binding|INFO|a910a553-85b1-4284-b87b-d67a0455f7a3: Claiming unknown Dec 2 05:11:10 localhost systemd-udevd[327461]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:11:10 localhost journal[230136]: ethtool ioctl error on tapa910a553-85: No such device
Dec 2 05:11:10 localhost ovn_controller[154505]: 2025-12-02T10:11:10Z|00481|binding|INFO|Setting lport a910a553-85b1-4284-b87b-d67a0455f7a3 ovn-installed in OVS
Dec 2 05:11:10 localhost journal[230136]: ethtool ioctl error on tapa910a553-85: No such device
Dec 2 05:11:10 localhost nova_compute[281854]: 2025-12-02 10:11:10.602 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:10 localhost journal[230136]: ethtool ioctl error on tapa910a553-85: No such device
Dec 2 05:11:10 localhost journal[230136]: ethtool ioctl error on tapa910a553-85: No such device
Dec 2 05:11:10 localhost journal[230136]: ethtool ioctl error on tapa910a553-85: No such device
Dec 2 05:11:10 localhost journal[230136]: ethtool ioctl error on tapa910a553-85: No such device
Dec 2 05:11:10 localhost journal[230136]: ethtool ioctl error on tapa910a553-85: No such device
Dec 2 05:11:10 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e169 e169: 6 total, 6 up, 6 in
Dec 2 05:11:10 localhost journal[230136]: ethtool ioctl error on tapa910a553-85: No such device
Dec 2 05:11:10 localhost nova_compute[281854]: 2025-12-02 10:11:10.641 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:10 localhost nova_compute[281854]: 2025-12-02 10:11:10.678 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:10 localhost ovn_controller[154505]: 2025-12-02T10:11:10Z|00482|binding|INFO|Setting lport a910a553-85b1-4284-b87b-d67a0455f7a3 up in Southbound
Dec 2 05:11:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:10.959 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30cf73fc-9798-4d84-a408-4d3ceadffb42, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a910a553-85b1-4284-b87b-d67a0455f7a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:10.961 160221 INFO neutron.agent.ovn.metadata.agent [-] Port a910a553-85b1-4284-b87b-d67a0455f7a3 in datapath b2aacd19-6fe6-44f4-8d3d-5e657d287b5b bound to our chassis#033[00m Dec 2 05:11:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:10.963 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b2aacd19-6fe6-44f4-8d3d-5e657d287b5b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:11:10 localhost ovn_metadata_agent[160216]: 2025-12-02 
10:11:10.964 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[490a8496-1c48-4f63-9f9f-549e5afe87a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:11 localhost nova_compute[281854]: 2025-12-02 10:11:11.090 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:11 localhost nova_compute[281854]: 2025-12-02 10:11:11.140 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:11 localhost podman[327532]: Dec 2 05:11:11 localhost podman[327532]: 2025-12-02 10:11:11.566244296 +0000 UTC m=+0.101286201 container create ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:11:11 localhost systemd[1]: Started libpod-conmon-ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647.scope. Dec 2 05:11:11 localhost podman[327532]: 2025-12-02 10:11:11.516458869 +0000 UTC m=+0.051500824 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:11:11 localhost systemd[1]: Started libcrun container. 
Dec 2 05:11:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1165e8d7b2376bb929e0428192dfe49a187f4a8ad512dcdd1d564a09da96bcf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:11:11 localhost podman[327532]: 2025-12-02 10:11:11.641787269 +0000 UTC m=+0.176829184 container init ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e170 e170: 6 total, 6 up, 6 in Dec 2 05:11:11 localhost podman[327532]: 2025-12-02 10:11:11.654644341 +0000 UTC m=+0.189686256 container start ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:11:11 localhost dnsmasq[327550]: started, version 2.85 cachesize 150 Dec 2 05:11:11 localhost dnsmasq[327550]: DNS service limited to local subnets Dec 2 05:11:11 localhost dnsmasq[327550]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect 
inotify dumpfile Dec 2 05:11:11 localhost dnsmasq[327550]: warning: no upstream servers configured Dec 2 05:11:11 localhost dnsmasq-dhcp[327550]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:11:11 localhost dnsmasq[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/addn_hosts - 0 addresses Dec 2 05:11:11 localhost dnsmasq-dhcp[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/host Dec 2 05:11:11 localhost dnsmasq-dhcp[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/opts Dec 2 05:11:11 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:11.937 263406 INFO neutron.agent.dhcp.agent [None req-2c291abe-bd54-4429-9cd1-9696b4ea9169 - - - - - -] DHCP configuration for ports {'8fbe0409-bc61-4d2a-8af9-ace603962489'} is completed#033[00m Dec 2 05:11:12 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:12 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:11:12 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:11:12 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:11:12 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e171 e171: 6 total, 6 up, 6 in Dec 2 05:11:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:14.138 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:11:13Z, 
description=, device_id=be2bd9ee-1025-4bde-b6f9-05c48824f4be, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5f55fbe7-c9d6-4e59-88a0-e8f30657e90b, ip_allocation=immediate, mac_address=fa:16:3e:f2:41:9c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:04Z, description=, dns_domain=, id=b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1013659582, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=24948, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2632, status=ACTIVE, subnets=['82575f4f-e89e-4800-8beb-d49b38e1570e'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:08Z, vlan_transparent=None, network_id=b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2649, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:13Z on network b2aacd19-6fe6-44f4-8d3d-5e657d287b5b#033[00m Dec 2 05:11:14 localhost dnsmasq[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/addn_hosts - 1 addresses Dec 2 05:11:14 localhost dnsmasq-dhcp[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/host Dec 2 05:11:14 localhost podman[327569]: 2025-12-02 10:11:14.361814625 +0000 UTC m=+0.065541178 container kill ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:11:14 localhost dnsmasq-dhcp[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/opts Dec 2 05:11:14 localhost systemd[1]: tmp-crun.36Gsyd.mount: Deactivated successfully. Dec 2 05:11:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:14.615 263406 INFO neutron.agent.dhcp.agent [None req-0c49c027-4db8-413b-9998-88181ac76b53 - - - - - -] DHCP configuration for ports {'5f55fbe7-c9d6-4e59-88a0-e8f30657e90b'} is completed#033[00m Dec 2 05:11:14 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e172 e172: 6 total, 6 up, 6 in Dec 2 05:11:15 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:15 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", 
"mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:16 localhost nova_compute[281854]: 2025-12-02 10:11:16.105 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:16 localhost nova_compute[281854]: 2025-12-02 10:11:16.142 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e173 e173: 6 total, 6 up, 6 in Dec 2 05:11:17 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e174 e174: 6 total, 6 up, 6 in Dec 2 05:11:18 localhost ovn_controller[154505]: 2025-12-02T10:11:18Z|00483|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:11:18 localhost nova_compute[281854]: 2025-12-02 10:11:18.049 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:18.537 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2025-12-02T10:11:13Z, description=, device_id=be2bd9ee-1025-4bde-b6f9-05c48824f4be, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5f55fbe7-c9d6-4e59-88a0-e8f30657e90b, ip_allocation=immediate, mac_address=fa:16:3e:f2:41:9c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:04Z, description=, dns_domain=, id=b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1013659582, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=24948, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2632, status=ACTIVE, subnets=['82575f4f-e89e-4800-8beb-d49b38e1570e'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:08Z, vlan_transparent=None, network_id=b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2649, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:13Z on network b2aacd19-6fe6-44f4-8d3d-5e657d287b5b#033[00m Dec 2 05:11:18 localhost dnsmasq[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/addn_hosts - 1 addresses Dec 2 05:11:18 localhost podman[327608]: 2025-12-02 10:11:18.730051896 +0000 UTC m=+0.056118126 container kill ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:18 localhost dnsmasq-dhcp[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/host Dec 2 05:11:18 localhost dnsmasq-dhcp[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/opts Dec 2 05:11:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:19.880 263406 INFO neutron.agent.dhcp.agent [None req-8ed8ad51-32e2-4ae0-ac77-d9dc9cd94d64 - - - - - -] DHCP configuration for ports {'5f55fbe7-c9d6-4e59-88a0-e8f30657e90b'} is completed#033[00m Dec 2 05:11:20 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:20 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:11:20 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:11:20 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:11:21 localhost nova_compute[281854]: 2025-12-02 10:11:21.141 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:21 localhost nova_compute[281854]: 2025-12-02 10:11:21.145 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e174 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:11:21 localhost podman[327630]: 2025-12-02 10:11:21.451904499 +0000 UTC m=+0.092001782 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:11:21 localhost podman[327630]: 2025-12-02 10:11:21.488703201 +0000 UTC m=+0.128800484 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 2 05:11:21 localhost systemd[1]: 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:11:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e175 e175: 6 total, 6 up, 6 in Dec 2 05:11:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:11:24 localhost systemd[1]: tmp-crun.xkkSdH.mount: Deactivated successfully. Dec 2 05:11:24 localhost podman[327649]: 2025-12-02 10:11:24.448910165 +0000 UTC m=+0.088772406 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:11:24 localhost podman[327649]: 2025-12-02 10:11:24.452764588 +0000 UTC m=+0.092626849 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 2 05:11:24 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:11:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:11:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:11:25 localhost systemd[1]: tmp-crun.vfLQy5.mount: Deactivated successfully. Dec 2 05:11:25 localhost podman[327668]: 2025-12-02 10:11:25.445835162 +0000 UTC m=+0.080407854 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, version=9.6, io.openshift.expose-services=) Dec 2 05:11:25 localhost podman[327668]: 2025-12-02 10:11:25.458034668 +0000 UTC m=+0.092607400 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible) Dec 2 05:11:25 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:11:25 localhost podman[327669]: 2025-12-02 10:11:25.552447533 +0000 UTC m=+0.183699045 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:11:25 localhost podman[327669]: 2025-12-02 10:11:25.607187363 +0000 UTC m=+0.238438955 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:11:25 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:11:25 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:25 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:25 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:25 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:26 localhost nova_compute[281854]: 2025-12-02 10:11:26.145 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e176 e176: 6 
total, 6 up, 6 in Dec 2 05:11:28 localhost dnsmasq[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/addn_hosts - 0 addresses Dec 2 05:11:28 localhost dnsmasq-dhcp[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/host Dec 2 05:11:28 localhost podman[327728]: 2025-12-02 10:11:28.228630912 +0000 UTC m=+0.067169611 container kill 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:11:28 localhost dnsmasq-dhcp[326766]: read /var/lib/neutron/dhcp/0563b4b4-439a-4655-9225-28a24ad09db2/opts Dec 2 05:11:30 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:30 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:11:31 localhost nova_compute[281854]: 2025-12-02 10:11:31.148 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Dec 2 05:11:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:31 localhost ovn_controller[154505]: 2025-12-02T10:11:31Z|00484|binding|INFO|Releasing lport 79d1b462-4e0f-4b98-8dd4-56658187af29 from this chassis (sb_readonly=0) Dec 2 05:11:31 localhost nova_compute[281854]: 2025-12-02 10:11:31.265 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:31 localhost ovn_controller[154505]: 2025-12-02T10:11:31Z|00485|binding|INFO|Setting lport 79d1b462-4e0f-4b98-8dd4-56658187af29 down in Southbound Dec 2 05:11:31 localhost kernel: device tap79d1b462-4e left promiscuous mode Dec 2 05:11:31 localhost nova_compute[281854]: 2025-12-02 10:11:31.288 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:31 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:31 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:31 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:11:31 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:11:31 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:11:31 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:31.713 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-0563b4b4-439a-4655-9225-28a24ad09db2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0563b4b4-439a-4655-9225-28a24ad09db2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f39b5ca1adf344dd9239d3d0131792d4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0e1d319-f053-4ffe-b337-109ee69f3933, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=79d1b462-4e0f-4b98-8dd4-56658187af29) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:31 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:31.715 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 79d1b462-4e0f-4b98-8dd4-56658187af29 in datapath 0563b4b4-439a-4655-9225-28a24ad09db2 unbound from our chassis#033[00m Dec 2 05:11:31 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:31.718 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0563b4b4-439a-4655-9225-28a24ad09db2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:11:31 localhost ovn_metadata_agent[160216]: 
2025-12-02 10:11:31.719 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[30bff567-3fc5-4832-a3ce-646901b9ee41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:11:33 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:33.000 263406 INFO neutron.agent.linux.ip_lib [None req-8c8c37d5-6503-49fe-bbd7-4b9d93fab532 - - - - - -] Device tapf7cbdc65-9f cannot be used as it has no MAC address#033[00m Dec 2 05:11:33 localhost nova_compute[281854]: 2025-12-02 10:11:33.018 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:33 localhost kernel: device tapf7cbdc65-9f entered promiscuous mode Dec 2 05:11:33 localhost NetworkManager[5965]: [1764670293.0243] manager: (tapf7cbdc65-9f): new Generic device (/org/freedesktop/NetworkManager/Devices/78) Dec 2 05:11:33 localhost nova_compute[281854]: 2025-12-02 10:11:33.025 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:33 localhost ovn_controller[154505]: 2025-12-02T10:11:33Z|00486|binding|INFO|Claiming lport f7cbdc65-9f74-447e-81d0-f5b9eb66518d for this chassis. Dec 2 05:11:33 localhost ovn_controller[154505]: 2025-12-02T10:11:33Z|00487|binding|INFO|f7cbdc65-9f74-447e-81d0-f5b9eb66518d: Claiming unknown Dec 2 05:11:33 localhost systemd-udevd[327903]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:11:33 localhost journal[230136]: ethtool ioctl error on tapf7cbdc65-9f: No such device Dec 2 05:11:33 localhost journal[230136]: ethtool ioctl error on tapf7cbdc65-9f: No such device Dec 2 05:11:33 localhost journal[230136]: ethtool ioctl error on tapf7cbdc65-9f: No such device Dec 2 05:11:33 localhost ovn_controller[154505]: 2025-12-02T10:11:33Z|00488|binding|INFO|Setting lport f7cbdc65-9f74-447e-81d0-f5b9eb66518d ovn-installed in OVS Dec 2 05:11:33 localhost ovn_controller[154505]: 2025-12-02T10:11:33Z|00489|binding|INFO|Setting lport f7cbdc65-9f74-447e-81d0-f5b9eb66518d up in Southbound Dec 2 05:11:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:33.064 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-90303572-56ca-4145-bbbf-5a391d217194', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90303572-56ca-4145-bbbf-5a391d217194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49e0ab2-43b0-4f40-9f61-82b8dfd7e5a9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f7cbdc65-9f74-447e-81d0-f5b9eb66518d) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:33 localhost nova_compute[281854]: 2025-12-02 10:11:33.064 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:33.067 160221 INFO neutron.agent.ovn.metadata.agent [-] Port f7cbdc65-9f74-447e-81d0-f5b9eb66518d in datapath 90303572-56ca-4145-bbbf-5a391d217194 bound to our chassis#033[00m Dec 2 05:11:33 localhost journal[230136]: ethtool ioctl error on tapf7cbdc65-9f: No such device Dec 2 05:11:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:33.069 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 90303572-56ca-4145-bbbf-5a391d217194 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:11:33 localhost journal[230136]: ethtool ioctl error on tapf7cbdc65-9f: No such device Dec 2 05:11:33 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:33.070 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[b2025a03-1813-4dd6-baae-0ae98086b88e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:33 localhost journal[230136]: ethtool ioctl error on tapf7cbdc65-9f: No such device Dec 2 05:11:33 localhost journal[230136]: ethtool ioctl error on tapf7cbdc65-9f: No such device Dec 2 05:11:33 localhost journal[230136]: ethtool ioctl error on tapf7cbdc65-9f: No such device Dec 2 05:11:33 localhost nova_compute[281854]: 2025-12-02 10:11:33.108 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:33 localhost nova_compute[281854]: 2025-12-02 10:11:33.131 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:33 localhost podman[327974]: Dec 2 05:11:33 localhost podman[327974]: 2025-12-02 10:11:33.970315982 +0000 UTC m=+0.090783140 container create 1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:34 localhost systemd[1]: Started libpod-conmon-1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702.scope. Dec 2 05:11:34 localhost systemd[1]: Started libcrun container. Dec 2 05:11:34 localhost podman[327974]: 2025-12-02 10:11:33.927284135 +0000 UTC m=+0.047751313 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:11:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2f932238cf32e54e614fd98285fb06ac8759ba677ce563d29c2fc806ecc63b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:11:34 localhost podman[327974]: 2025-12-02 10:11:34.043339088 +0000 UTC m=+0.163806236 container init 1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:34 localhost openstack_network_exporter[242845]: ERROR 10:11:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:11:34 localhost openstack_network_exporter[242845]: ERROR 10:11:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:11:34 localhost openstack_network_exporter[242845]: ERROR 10:11:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:11:34 localhost openstack_network_exporter[242845]: ERROR 10:11:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:11:34 localhost openstack_network_exporter[242845]: Dec 2 05:11:34 localhost openstack_network_exporter[242845]: ERROR 10:11:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:11:34 localhost openstack_network_exporter[242845]: Dec 2 05:11:34 localhost dnsmasq[327992]: started, version 2.85 cachesize 150 Dec 2 05:11:34 localhost dnsmasq[327992]: DNS service limited to local subnets Dec 2 05:11:34 localhost dnsmasq[327992]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:11:34 localhost dnsmasq[327992]: warning: no upstream servers configured Dec 2 05:11:34 localhost dnsmasq-dhcp[327992]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:11:34 localhost podman[327974]: 2025-12-02 10:11:34.06369673 +0000 UTC m=+0.184163888 container start 1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:34 localhost dnsmasq[327992]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/addn_hosts - 0 addresses Dec 2 05:11:34 localhost dnsmasq-dhcp[327992]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/host Dec 2 05:11:34 localhost dnsmasq-dhcp[327992]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/opts Dec 2 05:11:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:34.583 263406 INFO neutron.agent.dhcp.agent [None req-6e9e48ae-6429-421e-a3d4-49ee2f59c72c - - - - - -] DHCP configuration for ports {'f4ccb325-8298-4165-bdc6-b71985047423'} is completed#033[00m Dec 2 05:11:34 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:11:34 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:34 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:34 localhost ceph-mon[298296]: 
from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:34 localhost dnsmasq[327992]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/addn_hosts - 0 addresses Dec 2 05:11:34 localhost dnsmasq-dhcp[327992]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/host Dec 2 05:11:34 localhost dnsmasq-dhcp[327992]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/opts Dec 2 05:11:34 localhost podman[328010]: 2025-12-02 10:11:34.787101589 +0000 UTC m=+0.058599633 container kill 1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:11:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:11:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:34.873 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:34 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:34.875 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:11:34 localhost nova_compute[281854]: 2025-12-02 10:11:34.908 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:34 localhost podman[328026]: 2025-12-02 10:11:34.985407603 +0000 UTC m=+0.116027213 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:11:34 localhost podman[328026]: 2025-12-02 10:11:34.999825758 +0000 UTC m=+0.130445398 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:35 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:11:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:35.114 263406 INFO neutron.agent.dhcp.agent [None req-3844f544-b253-4fa1-b91b-f1e84edc2253 - - - - - -] DHCP configuration for ports {'f4ccb325-8298-4165-bdc6-b71985047423', 'f7cbdc65-9f74-447e-81d0-f5b9eb66518d'} is completed#033[00m
Dec 2 05:11:36 localhost podman[240799]: time="2025-12-02T10:11:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:11:36 localhost podman[240799]: @ - - [02/Dec/2025:10:11:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 163479 "" "Go-http-client/1.1"
Dec 2 05:11:36 localhost podman[240799]: @ - - [02/Dec/2025:10:11:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 21159 "" "Go-http-client/1.1"
Dec 2 05:11:36 localhost nova_compute[281854]: 2025-12-02 10:11:36.151 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:36 localhost nova_compute[281854]: 2025-12-02 10:11:36.155 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 2 05:11:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:36.691 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b8488798-9f51-452b-aaff-d402048f4009 with type ""#033[00m
Dec 2 05:11:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:36.693 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-90303572-56ca-4145-bbbf-5a391d217194', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90303572-56ca-4145-bbbf-5a391d217194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49e0ab2-43b0-4f40-9f61-82b8dfd7e5a9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f7cbdc65-9f74-447e-81d0-f5b9eb66518d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:11:36 localhost ovn_controller[154505]: 2025-12-02T10:11:36Z|00490|binding|INFO|Removing iface tapf7cbdc65-9f ovn-installed in OVS
Dec 2 05:11:36 localhost ovn_controller[154505]: 2025-12-02T10:11:36Z|00491|binding|INFO|Removing lport f7cbdc65-9f74-447e-81d0-f5b9eb66518d ovn-installed in OVS
Dec 2 05:11:36 localhost nova_compute[281854]: 2025-12-02 10:11:36.694 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:36 localhost nova_compute[281854]: 2025-12-02 10:11:36.701 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:36.701 160221 INFO neutron.agent.ovn.metadata.agent [-] Port f7cbdc65-9f74-447e-81d0-f5b9eb66518d in datapath 90303572-56ca-4145-bbbf-5a391d217194 unbound from our chassis#033[00m
Dec 2 05:11:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:36.704 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 90303572-56ca-4145-bbbf-5a391d217194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:11:36 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:36.705 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[c54158fd-2bd6-496b-9121-86970b51f208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:11:36 localhost podman[328064]: 2025-12-02 10:11:36.835122737 +0000 UTC m=+0.063887174 container kill 1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 2 05:11:36 localhost dnsmasq[327992]: exiting on receipt of SIGTERM
Dec 2 05:11:36 localhost systemd[1]: libpod-1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702.scope: Deactivated successfully.
Dec 2 05:11:36 localhost podman[328077]: 2025-12-02 10:11:36.883171607 +0000 UTC m=+0.035493637 container died 1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 2 05:11:36 localhost podman[328077]: 2025-12-02 10:11:36.972156179 +0000 UTC m=+0.124478169 container cleanup 1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 05:11:36 localhost systemd[1]: libpod-conmon-1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702.scope: Deactivated successfully.
Dec 2 05:11:36 localhost podman[328079]: 2025-12-02 10:11:36.992969643 +0000 UTC m=+0.135175283 container remove 1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 2 05:11:37 localhost nova_compute[281854]: 2025-12-02 10:11:37.009 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:37 localhost kernel: device tapf7cbdc65-9f left promiscuous mode
Dec 2 05:11:37 localhost nova_compute[281854]: 2025-12-02 10:11:37.022 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:37 localhost ovn_controller[154505]: 2025-12-02T10:11:37Z|00492|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0)
Dec 2 05:11:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:37.045 263406 INFO neutron.agent.dhcp.agent [None req-2ad2cb68-84bf-4287-b213-7d22078cddbd - - - - - -] Synchronizing state#033[00m
Dec 2 05:11:37 localhost nova_compute[281854]: 2025-12-02 10:11:37.110 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:37.302 263406 INFO neutron.agent.dhcp.agent [None req-ac6c4a74-9018-4989-9341-cd6278919b9d - - - - - -] All active networks have been fetched through RPC.#033[00m
Dec 2 05:11:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:37.304 263406 INFO neutron.agent.dhcp.agent [-] Starting network 90303572-56ca-4145-bbbf-5a391d217194 dhcp configuration#033[00m
Dec 2 05:11:37 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 2 05:11:37 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 2 05:11:37 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 2 05:11:37 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 2 05:11:37 localhost systemd[1]: var-lib-containers-storage-overlay-6d2f932238cf32e54e614fd98285fb06ac8759ba677ce563d29c2fc806ecc63b-merged.mount: Deactivated successfully.
Dec 2 05:11:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e20edf189cb064e5f2c290de7e1debaf866f456a33a65f179955b67eb930702-userdata-shm.mount: Deactivated successfully.
Dec 2 05:11:37 localhost systemd[1]: run-netns-qdhcp\x2d90303572\x2d56ca\x2d4145\x2dbbbf\x2d5a391d217194.mount: Deactivated successfully.
Dec 2 05:11:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:37.876 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:11:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:38.061 263406 INFO neutron.agent.linux.ip_lib [None req-cd09deba-86fb-4071-8f27-829712929f3f - - - - - -] Device tap5ea828b9-62 cannot be used as it has no MAC address#033[00m
Dec 2 05:11:38 localhost nova_compute[281854]: 2025-12-02 10:11:38.080 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:38 localhost kernel: device tap5ea828b9-62 entered promiscuous mode
Dec 2 05:11:38 localhost NetworkManager[5965]: [1764670298.0900] manager: (tap5ea828b9-62): new Generic device (/org/freedesktop/NetworkManager/Devices/79)
Dec 2 05:11:38 localhost ovn_controller[154505]: 2025-12-02T10:11:38Z|00493|binding|INFO|Claiming lport 5ea828b9-628f-47b3-a4f4-720d9ef822f9 for this chassis.
Dec 2 05:11:38 localhost ovn_controller[154505]: 2025-12-02T10:11:38Z|00494|binding|INFO|5ea828b9-628f-47b3-a4f4-720d9ef822f9: Claiming unknown
Dec 2 05:11:38 localhost nova_compute[281854]: 2025-12-02 10:11:38.090 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:38 localhost systemd-udevd[328118]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:11:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:38.101 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-90303572-56ca-4145-bbbf-5a391d217194', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90303572-56ca-4145-bbbf-5a391d217194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49e0ab2-43b0-4f40-9f61-82b8dfd7e5a9, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5ea828b9-628f-47b3-a4f4-720d9ef822f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:11:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:38.102 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 5ea828b9-628f-47b3-a4f4-720d9ef822f9 in datapath 90303572-56ca-4145-bbbf-5a391d217194 bound to our chassis#033[00m
Dec 2 05:11:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:38.103 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 90303572-56ca-4145-bbbf-5a391d217194 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:11:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:38.106 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[eb942bc5-5fca-41a4-8bf2-1968850d97ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:11:38 localhost journal[230136]: ethtool ioctl error on tap5ea828b9-62: No such device
Dec 2 05:11:38 localhost ovn_controller[154505]: 2025-12-02T10:11:38Z|00495|binding|INFO|Setting lport 5ea828b9-628f-47b3-a4f4-720d9ef822f9 ovn-installed in OVS
Dec 2 05:11:38 localhost ovn_controller[154505]: 2025-12-02T10:11:38Z|00496|binding|INFO|Setting lport 5ea828b9-628f-47b3-a4f4-720d9ef822f9 up in Southbound
Dec 2 05:11:38 localhost journal[230136]: ethtool ioctl error on tap5ea828b9-62: No such device
Dec 2 05:11:38 localhost journal[230136]: ethtool ioctl error on tap5ea828b9-62: No such device
Dec 2 05:11:38 localhost journal[230136]: ethtool ioctl error on tap5ea828b9-62: No such device
Dec 2 05:11:38 localhost journal[230136]: ethtool ioctl error on tap5ea828b9-62: No such device
Dec 2 05:11:38 localhost journal[230136]: ethtool ioctl error on tap5ea828b9-62: No such device
Dec 2 05:11:38 localhost journal[230136]: ethtool ioctl error on tap5ea828b9-62: No such device
Dec 2 05:11:38 localhost journal[230136]: ethtool ioctl error on tap5ea828b9-62: No such device
Dec 2 05:11:38 localhost nova_compute[281854]: 2025-12-02 10:11:38.169 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:38 localhost nova_compute[281854]: 2025-12-02 10:11:38.191 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:38 localhost podman[328189]:
Dec 2 05:11:38 localhost podman[328189]: 2025-12-02 10:11:38.995198211 +0000 UTC m=+0.092139007 container create c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:11:39 localhost systemd[1]: Started libpod-conmon-c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136.scope.
Dec 2 05:11:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 05:11:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 05:11:39 localhost systemd[1]: Started libcrun container.
Dec 2 05:11:39 localhost podman[328189]: 2025-12-02 10:11:38.953251573 +0000 UTC m=+0.050192409 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:11:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08ea60918ddc50749a8d9387348c0bc4f40c098c37c29fe4981d5d7b2eca14f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:11:39 localhost podman[328189]: 2025-12-02 10:11:39.062504055 +0000 UTC m=+0.159444811 container init c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 2 05:11:39 localhost podman[328189]: 2025-12-02 10:11:39.069878751 +0000 UTC m=+0.166819507 container start c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 2 05:11:39 localhost dnsmasq[328223]: started, version 2.85 cachesize 150
Dec 2 05:11:39 localhost dnsmasq[328223]: DNS service limited to local subnets
Dec 2 05:11:39 localhost dnsmasq[328223]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:11:39 localhost dnsmasq[328223]: warning: no upstream servers configured
Dec 2 05:11:39 localhost dnsmasq-dhcp[328223]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 2 05:11:39 localhost dnsmasq[328223]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/addn_hosts - 0 addresses
Dec 2 05:11:39 localhost dnsmasq-dhcp[328223]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/host
Dec 2 05:11:39 localhost dnsmasq-dhcp[328223]: read /var/lib/neutron/dhcp/90303572-56ca-4145-bbbf-5a391d217194/opts
Dec 2 05:11:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:39.110 263406 INFO neutron.agent.dhcp.agent [None req-cd09deba-86fb-4071-8f27-829712929f3f - - - - - -] Finished network 90303572-56ca-4145-bbbf-5a391d217194 dhcp configuration#033[00m
Dec 2 05:11:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:39.110 263406 INFO neutron.agent.dhcp.agent [None req-ac6c4a74-9018-4989-9341-cd6278919b9d - - - - - -] Synchronizing state complete#033[00m
Dec 2 05:11:39 localhost podman[328206]: 2025-12-02 10:11:39.131702529 +0000 UTC m=+0.077961799 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:11:39 localhost nova_compute[281854]: 2025-12-02 10:11:39.139 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:39 localhost kernel: device tap5ea828b9-62 left promiscuous mode
Dec 2 05:11:39 localhost ovn_controller[154505]: 2025-12-02T10:11:39Z|00497|binding|INFO|Releasing lport 5ea828b9-628f-47b3-a4f4-720d9ef822f9 from this chassis (sb_readonly=0)
Dec 2 05:11:39 localhost ovn_controller[154505]: 2025-12-02T10:11:39Z|00498|binding|INFO|Setting lport 5ea828b9-628f-47b3-a4f4-720d9ef822f9 down in Southbound
Dec 2 05:11:39 localhost nova_compute[281854]: 2025-12-02 10:11:39.157 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:39 localhost podman[328205]: 2025-12-02 10:11:39.107124014 +0000 UTC m=+0.058845390 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 05:11:39 localhost podman[328206]: 2025-12-02 10:11:39.162929911 +0000 UTC m=+0.109189201 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 2 05:11:39 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 05:11:39 localhost podman[328205]: 2025-12-02 10:11:39.192995951 +0000 UTC m=+0.144717377 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 05:11:39 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 05:11:39 localhost dnsmasq[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/addn_hosts - 0 addresses
Dec 2 05:11:39 localhost dnsmasq-dhcp[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/host
Dec 2 05:11:39 localhost podman[328274]: 2025-12-02 10:11:39.23681888 +0000 UTC m=+0.036980247 container kill ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 05:11:39 localhost dnsmasq-dhcp[327550]: read /var/lib/neutron/dhcp/b2aacd19-6fe6-44f4-8d3d-5e657d287b5b/opts
Dec 2 05:11:39 localhost ovn_controller[154505]: 2025-12-02T10:11:39Z|00499|binding|INFO|Releasing lport a910a553-85b1-4284-b87b-d67a0455f7a3 from this chassis (sb_readonly=1)
Dec 2 05:11:39 localhost ovn_controller[154505]: 2025-12-02T10:11:39Z|00500|if_status|INFO|Not setting lport a910a553-85b1-4284-b87b-d67a0455f7a3 down as sb is readonly
Dec 2 05:11:39 localhost nova_compute[281854]: 2025-12-02 10:11:39.408 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:39 localhost kernel: device tapa910a553-85 left promiscuous mode
Dec 2 05:11:39 localhost nova_compute[281854]: 2025-12-02 10:11:39.436 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:11:40 localhost systemd[1]: tmp-crun.zK1s6Y.mount: Deactivated successfully.
Dec 2 05:11:40 localhost ovn_controller[154505]: 2025-12-02T10:11:40Z|00501|binding|INFO|Setting lport a910a553-85b1-4284-b87b-d67a0455f7a3 down in Southbound
Dec 2 05:11:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:40.252 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-90303572-56ca-4145-bbbf-5a391d217194', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90303572-56ca-4145-bbbf-5a391d217194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49e0ab2-43b0-4f40-9f61-82b8dfd7e5a9, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5ea828b9-628f-47b3-a4f4-720d9ef822f9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:11:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:40.254 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 5ea828b9-628f-47b3-a4f4-720d9ef822f9 in datapath 90303572-56ca-4145-bbbf-5a391d217194 unbound from our chassis#033[00m
Dec 2 05:11:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:40.255 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 90303572-56ca-4145-bbbf-5a391d217194 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:11:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:40.257 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[482a6bbb-9e98-42f7-b7ba-8ba009701658]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:11:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:40.262 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30cf73fc-9798-4d84-a408-4d3ceadffb42, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a910a553-85b1-4284-b87b-d67a0455f7a3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:11:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:40.264 160221 INFO neutron.agent.ovn.metadata.agent [-] Port a910a553-85b1-4284-b87b-d67a0455f7a3 in datapath b2aacd19-6fe6-44f4-8d3d-5e657d287b5b unbound from our chassis#033[00m
Dec 2 05:11:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:40.266 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b2aacd19-6fe6-44f4-8d3d-5e657d287b5b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:11:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:40.266 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[6a37eb67-06dd-42e8-9ba7-673a445404e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:11:40 localhost podman[328325]: 2025-12-02 10:11:40.428833686 +0000 UTC m=+0.048375531 container kill 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:11:40 localhost systemd[1]: tmp-crun.6tAxkG.mount: Deactivated successfully.
Dec 2 05:11:40 localhost dnsmasq[326766]: exiting on receipt of SIGTERM
Dec 2 05:11:40 localhost systemd[1]: libpod-0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223.scope: Deactivated successfully.
Dec 2 05:11:40 localhost dnsmasq[328223]: exiting on receipt of SIGTERM Dec 2 05:11:40 localhost podman[328327]: 2025-12-02 10:11:40.487293433 +0000 UTC m=+0.102965564 container kill c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:40 localhost systemd[1]: libpod-c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136.scope: Deactivated successfully. Dec 2 05:11:40 localhost podman[328351]: 2025-12-02 10:11:40.495201304 +0000 UTC m=+0.046406127 container died 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:11:40 localhost podman[328351]: 2025-12-02 10:11:40.537643545 +0000 UTC m=+0.088848338 container remove 0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0563b4b4-439a-4655-9225-28a24ad09db2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:11:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:40.586 263406 INFO neutron.agent.dhcp.agent [None req-fc3da859-f30f-424d-bef1-e46124cbf810 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:40.586 263406 INFO neutron.agent.dhcp.agent [None req-fc3da859-f30f-424d-bef1-e46124cbf810 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:40 localhost podman[328377]: 2025-12-02 10:11:40.602796682 +0000 UTC m=+0.103201641 container died c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:11:40 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Dec 2 05:11:40 localhost dnsmasq[327550]: exiting on receipt of SIGTERM Dec 2 05:11:40 localhost podman[328413]: 2025-12-02 10:11:40.629832602 +0000 UTC m=+0.059421055 container kill ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:40 localhost systemd[1]: libpod-ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647.scope: Deactivated successfully. Dec 2 05:11:40 localhost podman[328377]: 2025-12-02 10:11:40.644085662 +0000 UTC m=+0.144490581 container cleanup c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:11:40 localhost systemd[1]: libpod-conmon-c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136.scope: Deactivated successfully. 
Dec 2 05:11:40 localhost ovn_controller[154505]: 2025-12-02T10:11:40Z|00502|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:11:40 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:40 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:40 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:40 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:40 localhost nova_compute[281854]: 2025-12-02 10:11:40.677 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:40 localhost podman[328381]: 2025-12-02 10:11:40.684380076 +0000 UTC m=+0.177824490 container remove 
c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-90303572-56ca-4145-bbbf-5a391d217194, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:11:40 localhost podman[328430]: 2025-12-02 10:11:40.707684527 +0000 UTC m=+0.060221916 container died ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:11:40 localhost podman[328430]: 2025-12-02 10:11:40.734515692 +0000 UTC m=+0.087053021 container cleanup ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:11:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:40.738 263406 INFO neutron.agent.dhcp.agent [None 
req-521d5c3a-4337-42a7-9ed9-dc9cd6428df5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:40 localhost systemd[1]: libpod-conmon-ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647.scope: Deactivated successfully. Dec 2 05:11:40 localhost systemd[1]: libpod-conmon-0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223.scope: Deactivated successfully. Dec 2 05:11:40 localhost podman[328434]: 2025-12-02 10:11:40.791534151 +0000 UTC m=+0.123942013 container remove ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2aacd19-6fe6-44f4-8d3d-5e657d287b5b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:41 localhost systemd[1]: tmp-crun.3UFmj3.mount: Deactivated successfully. Dec 2 05:11:41 localhost systemd[1]: var-lib-containers-storage-overlay-b08ea60918ddc50749a8d9387348c0bc4f40c098c37c29fe4981d5d7b2eca14f-merged.mount: Deactivated successfully. Dec 2 05:11:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c84887807090946374c8c196bfea5897079f0dee69d8cef578b2842bfe596136-userdata-shm.mount: Deactivated successfully. Dec 2 05:11:41 localhost systemd[1]: run-netns-qdhcp\x2d90303572\x2d56ca\x2d4145\x2dbbbf\x2d5a391d217194.mount: Deactivated successfully. Dec 2 05:11:41 localhost systemd[1]: var-lib-containers-storage-overlay-c1165e8d7b2376bb929e0428192dfe49a187f4a8ad512dcdd1d564a09da96bcf-merged.mount: Deactivated successfully. 
Dec 2 05:11:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab9ff839cf1c09d4901c4f131e170b0874aba048a2e4983b6a6f180e2dc8c647-userdata-shm.mount: Deactivated successfully. Dec 2 05:11:41 localhost systemd[1]: var-lib-containers-storage-overlay-78a8e6dcaae55003fe5ded5cdde675da092aada4d48b494b157ce714a6ab4dbb-merged.mount: Deactivated successfully. Dec 2 05:11:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0dce264616a8006789140bd8975bcae7ca02c0845b177dcf80b742142273a223-userdata-shm.mount: Deactivated successfully. Dec 2 05:11:41 localhost systemd[1]: run-netns-qdhcp\x2d0563b4b4\x2d439a\x2d4655\x2d9225\x2d28a24ad09db2.mount: Deactivated successfully. Dec 2 05:11:41 localhost nova_compute[281854]: 2025-12-02 10:11:41.155 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:41.219 263406 INFO neutron.agent.dhcp.agent [None req-4b8070dc-60eb-4c17-b6b4-b747df66d33a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:41 localhost systemd[1]: run-netns-qdhcp\x2db2aacd19\x2d6fe6\x2d44f4\x2d8d3d\x2d5e657d287b5b.mount: Deactivated successfully. 
Dec 2 05:11:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e177 e177: 6 total, 6 up, 6 in Dec 2 05:11:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:42.771 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:44 localhost podman[328484]: 2025-12-02 10:11:44.702234967 +0000 UTC m=+0.069372061 container kill 429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:11:44 localhost dnsmasq[327200]: exiting on receipt of SIGTERM Dec 2 05:11:44 localhost systemd[1]: libpod-429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b.scope: Deactivated successfully. 
Dec 2 05:11:44 localhost podman[328498]: 2025-12-02 10:11:44.782369923 +0000 UTC m=+0.062528508 container died 429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:11:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b-userdata-shm.mount: Deactivated successfully. Dec 2 05:11:44 localhost podman[328498]: 2025-12-02 10:11:44.812260299 +0000 UTC m=+0.092418854 container cleanup 429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:44 localhost systemd[1]: libpod-conmon-429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b.scope: Deactivated successfully. 
Dec 2 05:11:44 localhost podman[328500]: 2025-12-02 10:11:44.862324154 +0000 UTC m=+0.133839758 container remove 429fb6547ec1b5fd9ceffa4ef49e3d7aa860a5db27e9d9241621a6967c28233b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:45 localhost systemd[1]: var-lib-containers-storage-overlay-0b03b76d06f065d064f053ee52b9bc33a267b704dd60e482f157855766fd952e-merged.mount: Deactivated successfully. Dec 2 05:11:45 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:45 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:45 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:45 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:11:45 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e178 e178: 6 total, 6 up, 6 in Dec 2 05:11:45 localhost podman[328576]: Dec 2 05:11:45 localhost podman[328576]: 2025-12-02 10:11:45.834032909 +0000 UTC m=+0.110999930 container create 9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:11:45 localhost podman[328576]: 2025-12-02 10:11:45.781819507 +0000 UTC m=+0.058786568 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:11:45 localhost systemd[1]: Started libpod-conmon-9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b.scope. Dec 2 05:11:45 localhost systemd[1]: Started libcrun container. Dec 2 05:11:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9c1c638ce6a7c0f3bc7617e0c55d1a035f83afacf2a942fd033f689340d0b8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:11:45 localhost podman[328576]: 2025-12-02 10:11:45.933788558 +0000 UTC m=+0.210755569 container init 9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:11:45 localhost podman[328576]: 2025-12-02 10:11:45.943982269 +0000 UTC m=+0.220949280 container start 9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:45 localhost dnsmasq[328594]: started, version 2.85 cachesize 150 Dec 2 05:11:45 localhost dnsmasq[328594]: DNS service limited to local subnets Dec 2 05:11:45 localhost dnsmasq[328594]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:11:45 localhost dnsmasq[328594]: warning: no upstream servers configured Dec 2 05:11:45 localhost dnsmasq-dhcp[328594]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:11:45 localhost dnsmasq[328594]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/addn_hosts - 0 addresses Dec 2 05:11:45 localhost dnsmasq-dhcp[328594]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/host Dec 2 05:11:45 localhost dnsmasq-dhcp[328594]: read /var/lib/neutron/dhcp/d46489f9-a47b-465f-b68c-fdf4256b1786/opts Dec 2 05:11:46 localhost nova_compute[281854]: 2025-12-02 10:11:46.159 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:46 localhost dnsmasq[328594]: exiting on receipt of SIGTERM Dec 2 05:11:46 localhost podman[328612]: 2025-12-02 10:11:46.550525973 +0000 UTC m=+0.059424605 container kill 9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:11:46 localhost systemd[1]: libpod-9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b.scope: Deactivated successfully. Dec 2 05:11:46 localhost podman[328625]: 2025-12-02 10:11:46.628783528 +0000 UTC m=+0.063778191 container died 9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:11:46 localhost podman[328625]: 2025-12-02 10:11:46.659712692 +0000 UTC m=+0.094707315 container cleanup 9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:11:46 localhost systemd[1]: 
libpod-conmon-9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b.scope: Deactivated successfully. Dec 2 05:11:46 localhost systemd[1]: tmp-crun.NNbe5L.mount: Deactivated successfully. Dec 2 05:11:46 localhost systemd[1]: var-lib-containers-storage-overlay-b9c1c638ce6a7c0f3bc7617e0c55d1a035f83afacf2a942fd033f689340d0b8e-merged.mount: Deactivated successfully. Dec 2 05:11:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b-userdata-shm.mount: Deactivated successfully. Dec 2 05:11:46 localhost podman[328627]: 2025-12-02 10:11:46.712285904 +0000 UTC m=+0.137279470 container remove 9313fdd7c5777dd2e825836787a1a7760bab48abec3fcde00833cedffe975c0b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d46489f9-a47b-465f-b68c-fdf4256b1786, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:11:46 localhost kernel: device tapf3b02d29-75 left promiscuous mode Dec 2 05:11:46 localhost nova_compute[281854]: 2025-12-02 10:11:46.765 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost ovn_controller[154505]: 2025-12-02T10:11:46Z|00503|binding|INFO|Releasing lport f3b02d29-7542-44d0-a991-a01ec607868c from this chassis (sb_readonly=0) Dec 2 05:11:46 localhost ovn_controller[154505]: 2025-12-02T10:11:46Z|00504|binding|INFO|Setting lport f3b02d29-7542-44d0-a991-a01ec607868c down in Southbound Dec 2 05:11:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:46.788 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-d46489f9-a47b-465f-b68c-fdf4256b1786', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d46489f9-a47b-465f-b68c-fdf4256b1786', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c09a5d01-8e4d-42c4-b32a-41401f5c5328, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f3b02d29-7542-44d0-a991-a01ec607868c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:46 localhost nova_compute[281854]: 2025-12-02 10:11:46.790 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:46.791 160221 INFO neutron.agent.ovn.metadata.agent [-] Port f3b02d29-7542-44d0-a991-a01ec607868c in datapath d46489f9-a47b-465f-b68c-fdf4256b1786 unbound from our chassis#033[00m Dec 2 05:11:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e179 e179: 6 total, 6 up, 6 in Dec 2 05:11:46 localhost ovn_metadata_agent[160216]: 2025-12-02 
10:11:46.795 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d46489f9-a47b-465f-b68c-fdf4256b1786, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:11:46 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:46.796 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[350b00bf-8bfb-4a68-a032-ba274e817456]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:47.008 263406 INFO neutron.agent.dhcp.agent [None req-ce89e2a6-018b-4a98-97ec-2e364b80374b - - - - - -] DHCP configuration for ports {'7df0cc5b-0d4a-48fd-ae58-4a75ac1c28cb', 'f3b02d29-7542-44d0-a991-a01ec607868c'} is completed#033[00m Dec 2 05:11:47 localhost systemd[1]: run-netns-qdhcp\x2dd46489f9\x2da47b\x2d465f\x2db68c\x2dfdf4256b1786.mount: Deactivated successfully. Dec 2 05:11:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:47.013 263406 INFO neutron.agent.dhcp.agent [None req-9bf69d65-f59c-4af9-9592-f185d6d7ae61 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:48.499 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:48 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:48 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", 
"allow r"], "format": "json"} : dispatch Dec 2 05:11:48 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:48 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:49 localhost ovn_controller[154505]: 2025-12-02T10:11:49Z|00505|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:11:49 localhost nova_compute[281854]: 2025-12-02 10:11:49.089 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:49 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e180 e180: 6 total, 6 up, 6 in Dec 2 05:11:50 localhost neutron_sriov_agent[256494]: 2025-12-02 10:11:50.334 2 INFO neutron.agent.securitygroups_rpc [None req-c7a943f3-fab0-4b36-9210-2f6cba57e1de defcf0debbf84a5c9ec6342ae3d02928 8eea084241c14c5d9a6cc0d912041a21 - - default default] Security group member updated ['712bb249-1109-4289-a9cf-1e3d3f6e301e']#033[00m Dec 2 05:11:51 localhost nova_compute[281854]: 2025-12-02 10:11:51.161 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:51 localhost nova_compute[281854]: 
2025-12-02 10:11:51.164 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:11:52 localhost podman[328657]: 2025-12-02 10:11:52.46477202 +0000 UTC m=+0.105949955 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:11:52 localhost dnsmasq[326471]: exiting on receipt of SIGTERM Dec 2 05:11:52 localhost podman[328689]: 2025-12-02 10:11:52.562126364 +0000 UTC m=+0.065770544 container kill 5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba9b74ca-c826-47d9-9b2c-806aa0652611, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:11:52 localhost systemd[1]: libpod-5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf.scope: Deactivated successfully. 
Dec 2 05:11:52 localhost podman[328657]: 2025-12-02 10:11:52.59952454 +0000 UTC m=+0.240702475 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Dec 2 05:11:52 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:11:52 localhost podman[328701]: 2025-12-02 10:11:52.66330644 +0000 UTC m=+0.079821558 container died 5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba9b74ca-c826-47d9-9b2c-806aa0652611, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 2 05:11:52 localhost systemd[1]: tmp-crun.wLVBru.mount: Deactivated successfully. Dec 2 05:11:52 localhost podman[328701]: 2025-12-02 10:11:52.698751745 +0000 UTC m=+0.115266763 container cleanup 5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba9b74ca-c826-47d9-9b2c-806aa0652611, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:11:52 localhost systemd[1]: libpod-conmon-5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf.scope: Deactivated successfully. 
Dec 2 05:11:52 localhost podman[328703]: 2025-12-02 10:11:52.738839823 +0000 UTC m=+0.146727131 container remove 5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ba9b74ca-c826-47d9-9b2c-806aa0652611, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:11:52 localhost ovn_controller[154505]: 2025-12-02T10:11:52Z|00506|binding|INFO|Releasing lport 6305f6b8-f6d1-42c8-8da0-74c67d8b4998 from this chassis (sb_readonly=0) Dec 2 05:11:52 localhost kernel: device tap6305f6b8-f6 left promiscuous mode Dec 2 05:11:52 localhost nova_compute[281854]: 2025-12-02 10:11:52.752 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:52 localhost ovn_controller[154505]: 2025-12-02T10:11:52Z|00507|binding|INFO|Setting lport 6305f6b8-f6d1-42c8-8da0-74c67d8b4998 down in Southbound Dec 2 05:11:52 localhost nova_compute[281854]: 2025-12-02 10:11:52.777 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:53 localhost systemd[1]: var-lib-containers-storage-overlay-846af1f535196c2b1f3fa28965eaa27fb6ef8e77dfc94da176dafd06a1387634-merged.mount: Deactivated successfully. Dec 2 05:11:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b8d876561dfa8754024d0e9b3073d0fb7a95f16e176162883c7000375290bcf-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:11:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:53.529 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-ba9b74ca-c826-47d9-9b2c-806aa0652611', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba9b74ca-c826-47d9-9b2c-806aa0652611', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae8275bd-608b-4d44-bec9-32778c15dfb9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6305f6b8-f6d1-42c8-8da0-74c67d8b4998) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:53.531 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 6305f6b8-f6d1-42c8-8da0-74c67d8b4998 in datapath ba9b74ca-c826-47d9-9b2c-806aa0652611 unbound from our chassis#033[00m Dec 2 05:11:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:53.532 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ba9b74ca-c826-47d9-9b2c-806aa0652611 or it has no MAC or IP addresses configured, tearing the 
namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:11:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:53.533 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8a3a47-6193-4ffb-a95c-5946d5589382]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:53 localhost systemd[1]: run-netns-qdhcp\x2dba9b74ca\x2dc826\x2d47d9\x2d9b2c\x2d806aa0652611.mount: Deactivated successfully. Dec 2 05:11:53 localhost nova_compute[281854]: 2025-12-02 10:11:53.595 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:53 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 2 05:11:53 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:53 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 2 05:11:53 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 2 05:11:53 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:53.600 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:54.564 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:11:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e181 e181: 6 total, 6 up, 6 in Dec 2 05:11:54 localhost nova_compute[281854]: 2025-12-02 10:11:54.828 281858 DEBUG 
oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:11:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:11:55 localhost podman[328731]: 2025-12-02 10:11:55.435098655 +0000 UTC m=+0.073400767 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Dec 2 05:11:55 localhost podman[328731]: 2025-12-02 10:11:55.439979395 +0000 UTC m=+0.078281437 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Dec 2 05:11:55 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:11:56 localhost nova_compute[281854]: 2025-12-02 10:11:56.164 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 2 05:11:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:11:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:11:56 localhost podman[328750]: 2025-12-02 10:11:56.445972143 +0000 UTC m=+0.082665953 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.) Dec 2 05:11:56 localhost podman[328750]: 2025-12-02 10:11:56.465008301 +0000 UTC m=+0.101702051 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, architecture=x86_64) Dec 2 05:11:56 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:11:56 localhost podman[328751]: 2025-12-02 10:11:56.552644096 +0000 UTC m=+0.184899648 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:11:56 localhost podman[328751]: 2025-12-02 10:11:56.591092261 +0000 UTC m=+0.223347813 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:11:56 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: 
Deactivated successfully. Dec 2 05:11:57 localhost ovn_controller[154505]: 2025-12-02T10:11:57Z|00508|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:11:57 localhost nova_compute[281854]: 2025-12-02 10:11:57.098 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:57 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:57.434 263406 INFO neutron.agent.linux.ip_lib [None req-35d80d04-9627-4ced-9df7-fd9c39c88bcb - - - - - -] Device tap806e97f6-df cannot be used as it has no MAC address#033[00m Dec 2 05:11:57 localhost nova_compute[281854]: 2025-12-02 10:11:57.464 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:57 localhost kernel: device tap806e97f6-df entered promiscuous mode Dec 2 05:11:57 localhost NetworkManager[5965]: [1764670317.4753] manager: (tap806e97f6-df): new Generic device (/org/freedesktop/NetworkManager/Devices/80) Dec 2 05:11:57 localhost nova_compute[281854]: 2025-12-02 10:11:57.477 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:57 localhost ovn_controller[154505]: 2025-12-02T10:11:57Z|00509|binding|INFO|Claiming lport 806e97f6-df70-4fff-953f-74b4c641ea7b for this chassis. Dec 2 05:11:57 localhost ovn_controller[154505]: 2025-12-02T10:11:57Z|00510|binding|INFO|806e97f6-df70-4fff-953f-74b4c641ea7b: Claiming unknown Dec 2 05:11:57 localhost systemd-udevd[328802]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:11:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:57.499 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-e6b81515-1a91-47bb-810b-f820ca0caeff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6b81515-1a91-47bb-810b-f820ca0caeff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f863ae6-a602-4080-9383-f6b709828279, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=806e97f6-df70-4fff-953f-74b4c641ea7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:11:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:57.501 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 806e97f6-df70-4fff-953f-74b4c641ea7b in datapath e6b81515-1a91-47bb-810b-f820ca0caeff bound to our chassis#033[00m Dec 2 05:11:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:57.504 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 8af3668a-fa20-4b45-b40c-82550ac031c7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:11:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:57.504 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e6b81515-1a91-47bb-810b-f820ca0caeff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:11:57 localhost ovn_metadata_agent[160216]: 2025-12-02 10:11:57.505 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[83ee6d7a-d8a1-470e-b025-0d8ff1a5eefa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:11:57 localhost journal[230136]: ethtool ioctl error on tap806e97f6-df: No such device Dec 2 05:11:57 localhost journal[230136]: ethtool ioctl error on tap806e97f6-df: No such device Dec 2 05:11:57 localhost ovn_controller[154505]: 2025-12-02T10:11:57Z|00511|binding|INFO|Setting lport 806e97f6-df70-4fff-953f-74b4c641ea7b ovn-installed in OVS Dec 2 05:11:57 localhost ovn_controller[154505]: 2025-12-02T10:11:57Z|00512|binding|INFO|Setting lport 806e97f6-df70-4fff-953f-74b4c641ea7b up in Southbound Dec 2 05:11:57 localhost nova_compute[281854]: 2025-12-02 10:11:57.527 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:57 localhost journal[230136]: ethtool ioctl error on tap806e97f6-df: No such device Dec 2 05:11:57 localhost journal[230136]: ethtool ioctl error on tap806e97f6-df: No such device Dec 2 05:11:57 localhost journal[230136]: ethtool ioctl error on tap806e97f6-df: No such device Dec 2 05:11:57 localhost journal[230136]: ethtool ioctl error on tap806e97f6-df: No such device Dec 2 05:11:57 localhost journal[230136]: ethtool ioctl error on tap806e97f6-df: No such device Dec 2 05:11:57 localhost journal[230136]: ethtool ioctl error on tap806e97f6-df: No such device Dec 2 05:11:57 
localhost nova_compute[281854]: 2025-12-02 10:11:57.572 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:57 localhost nova_compute[281854]: 2025-12-02 10:11:57.613 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:11:57 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:11:57 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:57 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:11:57 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:11:58 localhost podman[328873]: Dec 2 05:11:58 localhost 
podman[328873]: 2025-12-02 10:11:58.576734886 +0000 UTC m=+0.083102146 container create b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:11:58 localhost podman[328873]: 2025-12-02 10:11:58.526426025 +0000 UTC m=+0.032793305 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:11:58 localhost systemd[1]: Started libpod-conmon-b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a.scope. Dec 2 05:11:58 localhost systemd[1]: Started libcrun container. Dec 2 05:11:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbf59d4ae25454728b26aa2ee5b303a57d6e9d975e82bd83d233416387dbda34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:11:58 localhost podman[328873]: 2025-12-02 10:11:58.715525834 +0000 UTC m=+0.221893084 container init b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:11:58 localhost systemd[1]: tmp-crun.D0jU4k.mount: Deactivated successfully. 
Dec 2 05:11:58 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e182 e182: 6 total, 6 up, 6 in
Dec 2 05:11:58 localhost podman[328873]: 2025-12-02 10:11:58.732479976 +0000 UTC m=+0.238847226 container start b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:11:58 localhost dnsmasq[328891]: started, version 2.85 cachesize 150
Dec 2 05:11:58 localhost dnsmasq[328891]: DNS service limited to local subnets
Dec 2 05:11:58 localhost dnsmasq[328891]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:11:58 localhost dnsmasq[328891]: warning: no upstream servers configured
Dec 2 05:11:58 localhost dnsmasq-dhcp[328891]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:11:58 localhost dnsmasq[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/addn_hosts - 0 addresses
Dec 2 05:11:58 localhost dnsmasq-dhcp[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/host
Dec 2 05:11:58 localhost dnsmasq-dhcp[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/opts
Dec 2 05:11:58 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:11:58.952 263406 INFO neutron.agent.dhcp.agent [None req-6c230f37-30a9-46b5-b999-1e0f714361e6 - - - - - -] DHCP configuration for ports {'8c152302-afea-42ec-bf7f-b738e9bcaab0'} is completed#033[00m
Dec 2 05:11:59 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e183 e183: 6 total, 6 up, 6 in
Dec 2 05:11:59 localhost nova_compute[281854]: 2025-12-02 10:11:59.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:11:59 localhost nova_compute[281854]: 2025-12-02 10:11:59.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 2 05:12:00 localhost nova_compute[281854]: 2025-12-02 10:12:00.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:12:00 localhost nova_compute[281854]: 2025-12-02 10:12:00.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 2 05:12:00 localhost nova_compute[281854]: 2025-12-02 10:12:00.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 2 05:12:00 localhost nova_compute[281854]: 2025-12-02 10:12:00.913 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 2 05:12:00 localhost nova_compute[281854]: 2025-12-02 10:12:00.914 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 2 05:12:00 localhost nova_compute[281854]: 2025-12-02 10:12:00.914 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 2 05:12:00 localhost nova_compute[281854]: 2025-12-02 10:12:00.915 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.003 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:01 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:01.062 2 INFO neutron.agent.securitygroups_rpc [None req-bd3bdc93-0ba6-42a3-9063-ee94eddd1f8f defcf0debbf84a5c9ec6342ae3d02928 8eea084241c14c5d9a6cc0d912041a21 - - default default] Security group member updated ['712bb249-1109-4289-a9cf-1e3d3f6e301e']#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.169 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.426 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.455 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.456 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.855 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.855 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.856 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.856 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 2 05:12:01 localhost nova_compute[281854]: 2025-12-02 10:12:01.857 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:12:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:01.885 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:01Z, description=, device_id=ff168046-1219-4329-be5a-02b35c99fef5, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bcc45c26-5f02-46a1-869b-9af695c3ec53, ip_allocation=immediate, mac_address=fa:16:3e:5f:8e:4d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:46Z, description=, dns_domain=, id=e6b81515-1a91-47bb-810b-f820ca0caeff, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--286242228, port_security_enabled=True, project_id=043cc6f66b444d00959c7dcdb078fbe8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25349, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2719, status=ACTIVE, subnets=['0c28764e-253b-48a2-be4d-b3892f027641'], tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:11:53Z, vlan_transparent=None, network_id=e6b81515-1a91-47bb-810b-f820ca0caeff, port_security_enabled=False, project_id=043cc6f66b444d00959c7dcdb078fbe8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2741, status=DOWN, tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:12:01Z on network e6b81515-1a91-47bb-810b-f820ca0caeff#033[00m
Dec 2 05:12:02 localhost dnsmasq[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/addn_hosts - 1 addresses
Dec 2 05:12:02 localhost dnsmasq-dhcp[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/host
Dec 2 05:12:02 localhost dnsmasq-dhcp[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/opts
Dec 2 05:12:02 localhost podman[328929]: 2025-12-02 10:12:02.138669116 +0000 UTC m=+0.067256793 container kill b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 2 05:12:02 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:12:02 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:12:02 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 2 05:12:02 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 2 05:12:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:12:02 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2146629484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.307 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:12:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e184 e184: 6 total, 6 up, 6 in
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.410 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.413 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 2 05:12:02 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:02.427 263406 INFO neutron.agent.dhcp.agent [None req-d45223e6-9c6d-4d82-bf42-af50435ba502 - - - - - -] DHCP configuration for ports {'bcc45c26-5f02-46a1-869b-9af695c3ec53'} is completed#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.649 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.650 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11177MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.651 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.651 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.737 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.737 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.738 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 05:12:02 localhost nova_compute[281854]: 2025-12-02 10:12:02.791 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:12:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:03.055 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:12:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:03.056 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:12:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:03.056 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:12:03 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:12:03 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1371812197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:12:03 localhost nova_compute[281854]: 2025-12-02 10:12:03.217 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:12:03 localhost nova_compute[281854]: 2025-12-02 10:12:03.224 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 2 05:12:03 localhost nova_compute[281854]: 2025-12-02 10:12:03.243 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 2 05:12:03 localhost nova_compute[281854]: 2025-12-02 10:12:03.245 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 2 05:12:03 localhost nova_compute[281854]: 2025-12-02 10:12:03.246 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:12:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:03.319 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:01Z, description=, device_id=ff168046-1219-4329-be5a-02b35c99fef5, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bcc45c26-5f02-46a1-869b-9af695c3ec53, ip_allocation=immediate, mac_address=fa:16:3e:5f:8e:4d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:46Z, description=, dns_domain=, id=e6b81515-1a91-47bb-810b-f820ca0caeff, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--286242228, port_security_enabled=True, project_id=043cc6f66b444d00959c7dcdb078fbe8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25349, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2719, status=ACTIVE, subnets=['0c28764e-253b-48a2-be4d-b3892f027641'], tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:11:53Z, vlan_transparent=None, network_id=e6b81515-1a91-47bb-810b-f820ca0caeff, port_security_enabled=False, project_id=043cc6f66b444d00959c7dcdb078fbe8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2741, status=DOWN, tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:12:01Z on network e6b81515-1a91-47bb-810b-f820ca0caeff#033[00m
Dec 2 05:12:03 localhost systemd[1]: tmp-crun.ke4vUd.mount: Deactivated successfully.
Dec 2 05:12:03 localhost dnsmasq[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/addn_hosts - 1 addresses
Dec 2 05:12:03 localhost dnsmasq-dhcp[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/host
Dec 2 05:12:03 localhost dnsmasq-dhcp[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/opts
Dec 2 05:12:03 localhost podman[328992]: 2025-12-02 10:12:03.554674292 +0000 UTC m=+0.076773778 container kill b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:12:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:03.803 263406 INFO neutron.agent.dhcp.agent [None req-97a713e9-cb8b-40ff-930e-41b9f4bb6013 - - - - - -] DHCP configuration for ports {'bcc45c26-5f02-46a1-869b-9af695c3ec53'} is completed#033[00m
Dec 2 05:12:04 localhost openstack_network_exporter[242845]: ERROR 10:12:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:12:04 localhost openstack_network_exporter[242845]: ERROR 10:12:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 2 05:12:04 localhost openstack_network_exporter[242845]: ERROR 10:12:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 2 05:12:04 localhost openstack_network_exporter[242845]: ERROR 10:12:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 2 05:12:04 localhost openstack_network_exporter[242845]:
Dec 2 05:12:04 localhost openstack_network_exporter[242845]: ERROR 10:12:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 2 05:12:04 localhost openstack_network_exporter[242845]:
Dec 2 05:12:04 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 2 05:12:04 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:12:04 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:12:04 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:12:05 localhost dnsmasq[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/addn_hosts - 0 addresses
Dec 2 05:12:05 localhost podman[329031]: 2025-12-02 10:12:05.018869701 +0000 UTC m=+0.058097009 container kill b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 05:12:05 localhost dnsmasq-dhcp[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/host
Dec 2 05:12:05 localhost dnsmasq-dhcp[328891]: read /var/lib/neutron/dhcp/e6b81515-1a91-47bb-810b-f820ca0caeff/opts
Dec 2 05:12:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 05:12:05 localhost podman[329044]: 2025-12-02 10:12:05.143058111 +0000 UTC m=+0.091526461 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 2 05:12:05 localhost podman[329044]: 2025-12-02 10:12:05.159179641 +0000 UTC m=+0.107647991 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 2 05:12:05 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 05:12:05 localhost ovn_controller[154505]: 2025-12-02T10:12:05Z|00513|binding|INFO|Releasing lport 806e97f6-df70-4fff-953f-74b4c641ea7b from this chassis (sb_readonly=0) Dec 2 05:12:05 localhost kernel: device tap806e97f6-df left promiscuous mode Dec 2 05:12:05 localhost nova_compute[281854]: 2025-12-02 10:12:05.241 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:05 localhost ovn_controller[154505]: 2025-12-02T10:12:05Z|00514|binding|INFO|Setting lport 806e97f6-df70-4fff-953f-74b4c641ea7b down in Southbound Dec 2 05:12:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:05.253 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-e6b81515-1a91-47bb-810b-f820ca0caeff', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6b81515-1a91-47bb-810b-f820ca0caeff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f863ae6-a602-4080-9383-f6b709828279, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=806e97f6-df70-4fff-953f-74b4c641ea7b) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:05.255 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 806e97f6-df70-4fff-953f-74b4c641ea7b in datapath e6b81515-1a91-47bb-810b-f820ca0caeff unbound from our chassis#033[00m Dec 2 05:12:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:05.258 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e6b81515-1a91-47bb-810b-f820ca0caeff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:12:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:05.259 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5454b5cd-66a3-447b-b034-af958d37bddd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:05 localhost nova_compute[281854]: 2025-12-02 10:12:05.267 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:12:05 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4271932895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:12:05 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:05.702 263406 INFO neutron.agent.linux.ip_lib [None req-def73ee1-72be-4d46-8540-e13b5f9f097e - - - - - -] Device tapd4923978-01 cannot be used as it has no MAC address#033[00m Dec 2 05:12:05 localhost nova_compute[281854]: 2025-12-02 10:12:05.757 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:05 localhost kernel: device tapd4923978-01 entered promiscuous mode Dec 2 05:12:05 localhost ovn_controller[154505]: 2025-12-02T10:12:05Z|00515|binding|INFO|Claiming lport d4923978-01a3-404a-b8a3-c2641f58992d for this chassis. Dec 2 05:12:05 localhost ovn_controller[154505]: 2025-12-02T10:12:05Z|00516|binding|INFO|d4923978-01a3-404a-b8a3-c2641f58992d: Claiming unknown Dec 2 05:12:05 localhost nova_compute[281854]: 2025-12-02 10:12:05.765 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:05 localhost NetworkManager[5965]: [1764670325.7661] manager: (tapd4923978-01): new Generic device (/org/freedesktop/NetworkManager/Devices/81) Dec 2 05:12:05 localhost systemd-udevd[329081]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:12:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:05.774 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-c6d140fa-e90a-46db-a2ed-8d904415f1fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d140fa-e90a-46db-a2ed-8d904415f1fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0ae5db5-d26f-4f88-83bb-815d569d8232, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d4923978-01a3-404a-b8a3-c2641f58992d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:05.776 160221 INFO neutron.agent.ovn.metadata.agent [-] Port d4923978-01a3-404a-b8a3-c2641f58992d in datapath c6d140fa-e90a-46db-a2ed-8d904415f1fc bound to our chassis#033[00m Dec 2 05:12:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:05.777 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c6d140fa-e90a-46db-a2ed-8d904415f1fc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:05.778 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[2673e15d-5bb3-42d3-9ad4-71b82cb8d320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:05 localhost journal[230136]: ethtool ioctl error on tapd4923978-01: No such device Dec 2 05:12:05 localhost journal[230136]: ethtool ioctl error on tapd4923978-01: No such device Dec 2 05:12:05 localhost ovn_controller[154505]: 2025-12-02T10:12:05Z|00517|binding|INFO|Setting lport d4923978-01a3-404a-b8a3-c2641f58992d ovn-installed in OVS Dec 2 05:12:05 localhost ovn_controller[154505]: 2025-12-02T10:12:05Z|00518|binding|INFO|Setting lport d4923978-01a3-404a-b8a3-c2641f58992d up in Southbound Dec 2 05:12:05 localhost nova_compute[281854]: 2025-12-02 10:12:05.803 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:05 localhost journal[230136]: ethtool ioctl error on tapd4923978-01: No such device Dec 2 05:12:05 localhost journal[230136]: ethtool ioctl error on tapd4923978-01: No such device Dec 2 05:12:05 localhost journal[230136]: ethtool ioctl error on tapd4923978-01: No such device Dec 2 05:12:05 localhost journal[230136]: ethtool ioctl error on tapd4923978-01: No such device Dec 2 05:12:05 localhost journal[230136]: ethtool ioctl error on tapd4923978-01: No such device Dec 2 05:12:05 localhost journal[230136]: ethtool ioctl error on tapd4923978-01: No such device Dec 2 05:12:05 localhost nova_compute[281854]: 2025-12-02 10:12:05.842 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:05 localhost nova_compute[281854]: 2025-12-02 10:12:05.870 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:06 localhost podman[240799]: time="2025-12-02T10:12:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:12:06 localhost nova_compute[281854]: 2025-12-02 10:12:06.171 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:06 localhost podman[240799]: @ - - [02/Dec/2025:10:12:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156107 "" "Go-http-client/1.1" Dec 2 05:12:06 localhost podman[240799]: @ - - [02/Dec/2025:10:12:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19255 "" "Go-http-client/1.1" Dec 2 05:12:06 localhost nova_compute[281854]: 2025-12-02 10:12:06.247 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:06 localhost nova_compute[281854]: 2025-12-02 10:12:06.248 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e185 e185: 6 total, 6 up, 6 in Dec 2 05:12:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e186 e186: 6 total, 6 up, 6 in Dec 2 05:12:06 localhost podman[329153]: Dec 2 05:12:06 localhost podman[329153]: 2025-12-02 10:12:06.816890606 +0000 UTC 
m=+0.098446454 container create aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:06 localhost nova_compute[281854]: 2025-12-02 10:12:06.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:06 localhost systemd[1]: Started libpod-conmon-aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b.scope. Dec 2 05:12:06 localhost podman[329153]: 2025-12-02 10:12:06.765735533 +0000 UTC m=+0.047291381 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:06 localhost systemd[1]: Started libcrun container. 
Dec 2 05:12:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc8399162bfbb6c24544e9605c7d0a83a32a4fccef1f87ba5e365b5de598b76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:06 localhost podman[329153]: 2025-12-02 10:12:06.892132051 +0000 UTC m=+0.173687899 container init aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:12:06 localhost podman[329153]: 2025-12-02 10:12:06.899674543 +0000 UTC m=+0.181230391 container start aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:06 localhost dnsmasq[329172]: started, version 2.85 cachesize 150 Dec 2 05:12:06 localhost dnsmasq[329172]: DNS service limited to local subnets Dec 2 05:12:06 localhost dnsmasq[329172]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:06 localhost dnsmasq[329172]: warning: no upstream servers configured Dec 
2 05:12:06 localhost dnsmasq-dhcp[329172]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:12:06 localhost dnsmasq[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/addn_hosts - 0 addresses Dec 2 05:12:06 localhost dnsmasq-dhcp[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/host Dec 2 05:12:06 localhost dnsmasq-dhcp[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/opts Dec 2 05:12:06 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:06.957 263406 INFO neutron.agent.dhcp.agent [None req-def73ee1-72be-4d46-8540-e13b5f9f097e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:05Z, description=, device_id=5a90933a-19ff-489b-be1e-b7113aa8ee2e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=79296e56-a39b-4325-9fe9-d6f456dbb5b5, ip_allocation=immediate, mac_address=fa:16:3e:d1:81:39, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:03Z, description=, dns_domain=, id=c6d140fa-e90a-46db-a2ed-8d904415f1fc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1076909150, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3091, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2747, status=ACTIVE, subnets=['8a2af494-3790-4481-98e0-c7ff125b2591'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:04Z, vlan_transparent=None, network_id=c6d140fa-e90a-46db-a2ed-8d904415f1fc, port_security_enabled=False, 
project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2764, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:05Z on network c6d140fa-e90a-46db-a2ed-8d904415f1fc#033[00m Dec 2 05:12:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:07.109 263406 INFO neutron.agent.dhcp.agent [None req-e95f08ea-8d0f-4693-a61b-5bb1bd69195c - - - - - -] DHCP configuration for ports {'a503dddc-bc64-44f7-af1e-2ea5e1045d5a'} is completed#033[00m Dec 2 05:12:07 localhost dnsmasq[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/addn_hosts - 1 addresses Dec 2 05:12:07 localhost dnsmasq-dhcp[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/host Dec 2 05:12:07 localhost dnsmasq-dhcp[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/opts Dec 2 05:12:07 localhost podman[329190]: 2025-12-02 10:12:07.158634883 +0000 UTC m=+0.070306804 container kill aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:12:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:07.360 263406 INFO neutron.agent.dhcp.agent [None req-def73ee1-72be-4d46-8540-e13b5f9f097e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2025-12-02T10:12:05Z, description=, device_id=5a90933a-19ff-489b-be1e-b7113aa8ee2e, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=79296e56-a39b-4325-9fe9-d6f456dbb5b5, ip_allocation=immediate, mac_address=fa:16:3e:d1:81:39, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:03Z, description=, dns_domain=, id=c6d140fa-e90a-46db-a2ed-8d904415f1fc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1076909150, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3091, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2747, status=ACTIVE, subnets=['8a2af494-3790-4481-98e0-c7ff125b2591'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:04Z, vlan_transparent=None, network_id=c6d140fa-e90a-46db-a2ed-8d904415f1fc, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2764, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:05Z on network c6d140fa-e90a-46db-a2ed-8d904415f1fc#033[00m Dec 2 05:12:07 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 2 05:12:07 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 2 05:12:07 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": 
"client.alice_bob"} : dispatch Dec 2 05:12:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:07.486 263406 INFO neutron.agent.dhcp.agent [None req-869e7b65-daa9-41b1-ad3c-bc573c3a807c - - - - - -] DHCP configuration for ports {'79296e56-a39b-4325-9fe9-d6f456dbb5b5'} is completed#033[00m Dec 2 05:12:07 localhost dnsmasq[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/addn_hosts - 1 addresses Dec 2 05:12:07 localhost dnsmasq-dhcp[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/host Dec 2 05:12:07 localhost podman[329228]: 2025-12-02 10:12:07.596587405 +0000 UTC m=+0.078094762 container kill aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 2 05:12:07 localhost dnsmasq-dhcp[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/opts Dec 2 05:12:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e187 e187: 6 total, 6 up, 6 in Dec 2 05:12:07 localhost nova_compute[281854]: 2025-12-02 10:12:07.824 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:07.928 263406 INFO neutron.agent.dhcp.agent [None req-cac44c48-0caf-4c93-82a6-55b58b4e83a1 - - - - - -] DHCP configuration for ports {'79296e56-a39b-4325-9fe9-d6f456dbb5b5'} is 
completed#033[00m Dec 2 05:12:08 localhost dnsmasq[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/addn_hosts - 0 addresses Dec 2 05:12:08 localhost dnsmasq-dhcp[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/host Dec 2 05:12:08 localhost podman[329265]: 2025-12-02 10:12:08.431724321 +0000 UTC m=+0.058387527 container kill aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:12:08 localhost dnsmasq-dhcp[329172]: read /var/lib/neutron/dhcp/c6d140fa-e90a-46db-a2ed-8d904415f1fc/opts Dec 2 05:12:08 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 2 05:12:08 localhost ovn_controller[154505]: 2025-12-02T10:12:08Z|00519|binding|INFO|Releasing lport d4923978-01a3-404a-b8a3-c2641f58992d from this chassis (sb_readonly=0) Dec 2 05:12:08 localhost kernel: device tapd4923978-01 left promiscuous mode Dec 2 05:12:08 localhost ovn_controller[154505]: 2025-12-02T10:12:08Z|00520|binding|INFO|Setting lport d4923978-01a3-404a-b8a3-c2641f58992d down in Southbound Dec 2 05:12:08 localhost nova_compute[281854]: 2025-12-02 10:12:08.672 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:08.682 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-c6d140fa-e90a-46db-a2ed-8d904415f1fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d140fa-e90a-46db-a2ed-8d904415f1fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0ae5db5-d26f-4f88-83bb-815d569d8232, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d4923978-01a3-404a-b8a3-c2641f58992d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:08.685 160221 INFO neutron.agent.ovn.metadata.agent [-] Port d4923978-01a3-404a-b8a3-c2641f58992d in datapath c6d140fa-e90a-46db-a2ed-8d904415f1fc unbound from our chassis#033[00m Dec 2 05:12:08 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:08.687 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c6d140fa-e90a-46db-a2ed-8d904415f1fc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:08 localhost ovn_metadata_agent[160216]: 
2025-12-02 10:12:08.688 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1aa5d3-8c1e-46c3-8644-1111ffe3cdd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:08 localhost nova_compute[281854]: 2025-12-02 10:12:08.694 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:08 localhost nova_compute[281854]: 2025-12-02 10:12:08.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:12:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:12:09 localhost systemd[1]: tmp-crun.EQvAtW.mount: Deactivated successfully. 
Dec 2 05:12:09 localhost podman[329287]: 2025-12-02 10:12:09.443112544 +0000 UTC m=+0.080542268 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:12:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e188 e188: 6 total, 6 up, 6 in Dec 2 05:12:09 localhost podman[329287]: 2025-12-02 10:12:09.484181767 +0000 UTC m=+0.121611471 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:12:09 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:12:09 localhost podman[329288]: 2025-12-02 10:12:09.564217421 +0000 UTC m=+0.196528319 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:12:09 localhost podman[329288]: 2025-12-02 10:12:09.604264938 +0000 UTC m=+0.236575846 container exec_died 
cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:09 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:12:09 localhost dnsmasq[329172]: exiting on receipt of SIGTERM Dec 2 05:12:09 localhost podman[329352]: 2025-12-02 10:12:09.697433331 +0000 UTC m=+0.033877384 container kill aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:09 localhost systemd[1]: libpod-aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b.scope: Deactivated successfully. Dec 2 05:12:09 localhost podman[329366]: 2025-12-02 10:12:09.737992482 +0000 UTC m=+0.032316933 container died aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:12:09 localhost podman[329366]: 2025-12-02 10:12:09.760278736 +0000 UTC m=+0.054603167 container cleanup aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, 
tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:12:09 localhost systemd[1]: libpod-conmon-aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b.scope: Deactivated successfully. Dec 2 05:12:09 localhost podman[329368]: 2025-12-02 10:12:09.775284585 +0000 UTC m=+0.064297844 container remove aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c6d140fa-e90a-46db-a2ed-8d904415f1fc, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:12:09 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:09.975 263406 INFO neutron.agent.dhcp.agent [None req-e455c356-8328-4ce1-a2e9-c46b0fd1d465 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:09 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:09.975 263406 INFO neutron.agent.dhcp.agent [None req-e455c356-8328-4ce1-a2e9-c46b0fd1d465 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:10 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:10.015 263406 INFO neutron.agent.linux.ip_lib [None req-b53b37c2-a206-4ab9-b0fc-93c49b697de4 - - - - - -] Device tap8b3a7663-ad cannot be used as it has no MAC address#033[00m Dec 2 05:12:10 localhost nova_compute[281854]: 2025-12-02 10:12:10.039 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:10 localhost kernel: 
device tap8b3a7663-ad entered promiscuous mode Dec 2 05:12:10 localhost NetworkManager[5965]: [1764670330.0478] manager: (tap8b3a7663-ad): new Generic device (/org/freedesktop/NetworkManager/Devices/82) Dec 2 05:12:10 localhost nova_compute[281854]: 2025-12-02 10:12:10.048 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:10 localhost systemd-udevd[329407]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:12:10 localhost ovn_controller[154505]: 2025-12-02T10:12:10Z|00521|binding|INFO|Claiming lport 8b3a7663-ad72-4099-a4de-0fa85d29cfd8 for this chassis. Dec 2 05:12:10 localhost ovn_controller[154505]: 2025-12-02T10:12:10Z|00522|binding|INFO|8b3a7663-ad72-4099-a4de-0fa85d29cfd8: Claiming unknown Dec 2 05:12:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:10.060 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-9a6d986e-0caf-4eff-b1d3-a10e7add5365', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a6d986e-0caf-4eff-b1d3-a10e7add5365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096ffa0a51b143039159efc232ec547a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=fea98301-f19b-4654-8756-4655244bd809, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8b3a7663-ad72-4099-a4de-0fa85d29cfd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:10.065 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 8b3a7663-ad72-4099-a4de-0fa85d29cfd8 in datapath 9a6d986e-0caf-4eff-b1d3-a10e7add5365 bound to our chassis#033[00m Dec 2 05:12:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:10.067 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a6d986e-0caf-4eff-b1d3-a10e7add5365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:10 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:10.067 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a8edd05d-079f-4360-8501-38ffa850b4b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:10 localhost ovn_controller[154505]: 2025-12-02T10:12:10Z|00523|binding|INFO|Setting lport 8b3a7663-ad72-4099-a4de-0fa85d29cfd8 ovn-installed in OVS Dec 2 05:12:10 localhost ovn_controller[154505]: 2025-12-02T10:12:10Z|00524|binding|INFO|Setting lport 8b3a7663-ad72-4099-a4de-0fa85d29cfd8 up in Southbound Dec 2 05:12:10 localhost nova_compute[281854]: 2025-12-02 10:12:10.095 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:10 localhost nova_compute[281854]: 2025-12-02 10:12:10.126 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:10 localhost nova_compute[281854]: 2025-12-02 10:12:10.158 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:10 localhost systemd[1]: tmp-crun.sdFSmW.mount: Deactivated successfully. Dec 2 05:12:10 localhost systemd[1]: var-lib-containers-storage-overlay-abc8399162bfbb6c24544e9605c7d0a83a32a4fccef1f87ba5e365b5de598b76-merged.mount: Deactivated successfully. Dec 2 05:12:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aaae0f07d69c92194e9087fcffa523dd13f97696a1a42cd16a31621218c7972b-userdata-shm.mount: Deactivated successfully. Dec 2 05:12:10 localhost systemd[1]: run-netns-qdhcp\x2dc6d140fa\x2de90a\x2d46db\x2da2ed\x2d8d904415f1fc.mount: Deactivated successfully. Dec 2 05:12:10 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:10.425 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:10 localhost nova_compute[281854]: 2025-12-02 10:12:10.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:11 localhost podman[329462]: Dec 2 05:12:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:12:11 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/657463899' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:12:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:12:11 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/657463899' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:12:11 localhost podman[329462]: 2025-12-02 10:12:11.067697347 +0000 UTC m=+0.097012656 container create e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a6d986e-0caf-4eff-b1d3-a10e7add5365, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:12:11 localhost systemd[1]: Started libpod-conmon-e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02.scope. Dec 2 05:12:11 localhost podman[329462]: 2025-12-02 10:12:11.020114769 +0000 UTC m=+0.049430108 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:11 localhost systemd[1]: Started libcrun container. 
Dec 2 05:12:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b61af61b108caaf0c1a1d229e8160b6e6a2c2166342cae9dc858179db74819f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:11 localhost podman[329462]: 2025-12-02 10:12:11.148759267 +0000 UTC m=+0.178074576 container init e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a6d986e-0caf-4eff-b1d3-a10e7add5365, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:11 localhost podman[329462]: 2025-12-02 10:12:11.158451085 +0000 UTC m=+0.187766394 container start e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a6d986e-0caf-4eff-b1d3-a10e7add5365, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:11 localhost dnsmasq[329480]: started, version 2.85 cachesize 150 Dec 2 05:12:11 localhost dnsmasq[329480]: DNS service limited to local subnets Dec 2 05:12:11 localhost dnsmasq[329480]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:11 localhost dnsmasq[329480]: warning: no upstream servers configured Dec 
2 05:12:11 localhost dnsmasq-dhcp[329480]: DHCPv6, static leases only on 2001:db8:0:ffff::, lease time 1d Dec 2 05:12:11 localhost dnsmasq[329480]: read /var/lib/neutron/dhcp/9a6d986e-0caf-4eff-b1d3-a10e7add5365/addn_hosts - 0 addresses Dec 2 05:12:11 localhost dnsmasq-dhcp[329480]: read /var/lib/neutron/dhcp/9a6d986e-0caf-4eff-b1d3-a10e7add5365/host Dec 2 05:12:11 localhost dnsmasq-dhcp[329480]: read /var/lib/neutron/dhcp/9a6d986e-0caf-4eff-b1d3-a10e7add5365/opts Dec 2 05:12:11 localhost nova_compute[281854]: 2025-12-02 10:12:11.175 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:11 localhost ovn_controller[154505]: 2025-12-02T10:12:11Z|00525|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:12:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:11 localhost nova_compute[281854]: 2025-12-02 10:12:11.272 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:11 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:11.305 263406 INFO neutron.agent.dhcp.agent [None req-d581afbc-134d-463b-a55f-198042403923 - - - - - -] DHCP configuration for ports {'30df0712-b628-4d3a-8a20-1aa5e2e6b62f'} is completed#033[00m Dec 2 05:12:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:12:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e189 e189: 6 total, 6 up, 6 in Dec 2 05:12:12 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:12:12 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/325227520' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:12:12 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:12:12 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/325227520' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:12:13 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:13.250 2 INFO neutron.agent.securitygroups_rpc [None req-1b9cf27d-4f71-42a8-aff0-a386ad5e469f 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']#033[00m Dec 2 05:12:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e190 e190: 6 total, 6 up, 6 in Dec 2 05:12:14 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 2 05:12:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:12:14 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 2 05:12:14 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.108 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': 
'0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.109 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.114 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0941a6d1-f5f9-42be-888c-2cf3b53f844a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.110142', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60cd8668-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': 'c2edacff1acd44b2ca6bddbad40764893443ec374760929b917e9322ecd65951'}]}, 'timestamp': '2025-12-02 10:12:16.115206', '_unique_id': 'fa86c36372904335a22520bf70c0b7bd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 
2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.117 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.118 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.118 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0dbaf221-4174-4b83-ad58-4afe3c794183', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.118733', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60ce2bfe-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': '77927cfbf380b4ae973b7a2700565f72d94640245765043f14c155bfa48e23d4'}]}, 'timestamp': '2025-12-02 10:12:16.119473', '_unique_id': 'c570d5b5c57e47488c87ddd444eb1dce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.120 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.122 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.122 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.136 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.137 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '58c4df1e-edc3-4e58-8aa9-dc6e855fcac6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.123084', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60d0e952-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.342187659, 'message_signature': '186bdc927890c25abd74209eea8d41d08a90279c267baf4f36d0a0472c2d5ade'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.123084', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60d0fb36-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.342187659, 'message_signature': '7073cf6d260ce8a1a0bf8de9203525a2bb98a03afcf3bded3e4e6f8c2766e490'}]}, 'timestamp': '2025-12-02 10:12:16.137804', '_unique_id': 'c8137cb34ba947828746423a8cee67c1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.138 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.140 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.140 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.140 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '453a95b6-d19b-4e4a-b976-6409d51cf3d3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.140486', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60d17a02-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': 'c270180542612c6bb8c80c7d992580af41465583a4e2bc185be0e10230b95419'}]}, 'timestamp': '2025-12-02 10:12:16.140990', '_unique_id': '22d2408dd5b94bd785acc0345261b5d3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2
05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.142 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.143 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.143 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd588d663-1336-4266-8967-6f77e165184a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.143449', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60d1ed84-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': '45481602be5629dc4cc38cd044ed2dc14ba97f19cc1d1b0b1a4fb70371fe9de5'}]}, 'timestamp': '2025-12-02 10:12:16.143940', '_unique_id': '77786c55cf264802b62d5aea096c993b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.144 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.145 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.167 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 18990000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '746a8bd4-e123-4121-8381-246db5d7797d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18990000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:12:16.146021', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '60d598bc-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.386411188, 'message_signature': '1871605b5f87c260c49847c45a13eb95bab7c138bacec43149ab244e7763f36e'}]}, 'timestamp': '2025-12-02 10:12:16.167987', '_unique_id': 'ca9c5d01b2884878a7e8d08f34fdd4c2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.170 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 05:12:16 localhost nova_compute[281854]: 2025-12-02 10:12:16.179 281858 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.210 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f6ddcf75-0026-43a5-8a08-3e9d02ec981d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.170345', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60dc0dfa-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': '1b13798f951ef363fb6c88aaf80d9efc4f3d7a11998d026ed8b81b688073c0c6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.170345', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60dc24e8-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': 'e6f414dc2c5ad59451a9e13d41ba34467b08acbbc64a83f51f7c773544c4b809'}]}, 'timestamp': '2025-12-02 10:12:16.210886', '_unique_id': 'ca8d39bad68c48f3ae423f3c4f619fb7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.212 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.213 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.213 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c06cd3c8-63d6-4de4-b944-9c8f30986c90', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.213934', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60dcaefe-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': '81c44ac76ccc95e7f5b969ee78b27e92667dab38f421cf5fcf254e3e225900d7'}]}, 'timestamp': '2025-12-02 10:12:16.214496', '_unique_id': '4066944921df48e3a06f20eea5649fb9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:12:16.215 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.215 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.216 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e385f3b2-ceaa-47e5-93a8-131c3cc109d4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.216989', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60dd24ce-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.342187659, 'message_signature': '3cb826deb531f3d6256ed0b253053890909151adacb17c37dcbbe0f50c62a17d'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.216989', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60dd363a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.342187659, 'message_signature': 'db5a3e0fc720e99e69d867d7ce3ec8b06948d0b8a6aa7316070b9d1a69cb8b0c'}]}, 'timestamp': '2025-12-02 10:12:16.217863', '_unique_id': '3512a0e634cc47d2b9566181bf0de4a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.218 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '9d13f493-b0e0-4e74-844a-288122e3746b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.220194', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60dda200-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.342187659, 'message_signature': '8e478316ff31cc06294787774acc0dfecaed14691adbb1b4784e4851a4c6018c'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.220194', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60ddb38a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.342187659, 'message_signature': 'c0712134f0a06f5d812d202ed2c3f78a04505bd9c0124419a32c9b4c763c07f8'}]}, 'timestamp': '2025-12-02 10:12:16.221068', '_unique_id': '04509c7f5f28496c97235ac782db3e1e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.221 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.223 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '4eced968-48b4-400b-b047-878ded048894', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.223207', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60de1794-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': '717c9823efb39a353aad090eb26385fabc8cf6d436a6b4a5819b7cd41c64fd69'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.223207', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60de2e5a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': '5b79e555648d6fdb98eec8082cf98024ce4da9f168b2d54cfa36ddf56caf87c5'}]}, 'timestamp': '2025-12-02 10:12:16.224285', '_unique_id': '1d8109f8a816493c9912a654250a72ee'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:12:16.225 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.225 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.227 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.227 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.228 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '69b4d8b0-a90e-48c0-ad5e-8d4c8fa26a68', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.227730', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60ded6fc-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': '238f72d56f148f4c8e39cd774177657555ffe5f6aee63b744bab7fe96bba2f08'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.227730', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60deed5e-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': 'df0c489eadee7207257945db50e27ae1ae1eb28b6315e92eb4bdf0c2334e9123'}]}, 'timestamp': '2025-12-02 10:12:16.229119', '_unique_id': 'e3528992f84c4844a5fcfbac54c70019'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.230 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.231 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.232 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '349f9f33-7c30-43e8-98fc-355f67090deb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.232153', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60df76f2-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': '16de74b41a73fa21afa56567a6f6ed65e09c01f7bb418e1b04a4867361c95d51'}]}, 'timestamp': '2025-12-02 10:12:16.232688', '_unique_id': '05c11c66f95e4bd4bbb90f178c13603d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.233 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.234 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.234 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.235 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd04bba80-c86b-43da-93f2-0f2ca3b1700c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.234837', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60dfddc2-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': 'f59390e942b8890904f30956a592dbcdb6d6394fc7811fa81892676042ed7532'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.234837', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60dff01e-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': '8dd092f8624b0e15e0dcc1267e5773ffa97347c44a0ae07b6e781a2f8796ba25'}]}, 'timestamp': '2025-12-02 10:12:16.235758', '_unique_id': 'f4fce673db1a4beda0c5de73b2afa31d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.236 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.238 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.238 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8db02dd1-1b8b-4275-87a7-4e2d59d5fee1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.238186', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60e060ee-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': '78661369bf9d9112e39e56b8b2f712e0731a29f6c81fd9ab394911bbbafdf347'}]}, 'timestamp': '2025-12-02 10:12:16.238674', '_unique_id': 'b465affe937246438ce0dc942e87fc1b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.239 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.240 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.240 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a83b81d1-2114-414c-9258-83ac4262b0f5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.240830', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60e0c7f0-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': '4610c6914605ccb3069d543c008894a75e904dc933568e55ebb4a1ec29cc0f0e'}]}, 'timestamp': '2025-12-02 10:12:16.241273', '_unique_id': 'b79e8cc118554c23bc002b88913624bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.242 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.243 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.243 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ab24288-746a-4d2e-92ca-2774e89f30e7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.243909', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60e1402c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': '7167b08072fd2628085b848045dc8be46f394de84df0edfbb81350ecd6e23940'}]}, 'timestamp': '2025-12-02 10:12:16.244354', '_unique_id': '2be22dee6b2c44ea8d994b1a27ce4205'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:12:16.245 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.245 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.246 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.246 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c73fa7fe-2156-4b53-ba1a-f6dd0d7bdde4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:12:16.246438', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '60e1a4cc-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.386411188, 'message_signature': '78a79cc11c10ca07bfbaad1d96528bb134852c51dcc5d8d97df0aa08c5b06da2'}]}, 'timestamp': '2025-12-02 10:12:16.246916', '_unique_id': '72f4366c282d42eb9c6a30500c188cc9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.247 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.248 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.248 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.248 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.248 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.248 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0e77c4b8-0f97-4248-8e0d-5fd900428db9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.248715', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60e1f8b4-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': '42309a902940abc278e873eaac8b808198cd10b7ddef8d1ed5ab00a76984efa5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.248715', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60e2026e-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': '063f31faf2fbe70b72e1e652c17fc80139525d626dc45cade51a37cd09c56313'}]}, 'timestamp': '2025-12-02 10:12:16.249233', '_unique_id': '521d6b653c1c4ee4bc8778d751d74243'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.249 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.250 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.250 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c3fff19e-81e4-48cd-9d0d-947f310edc2e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:12:16.250871', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '60e25354-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.329248945, 'message_signature': '4fc0fbef4e4c05121b7af963844de40972d5b9f2ed06bd547ced3c0c69a34c95'}]}, 'timestamp': '2025-12-02 10:12:16.251318', '_unique_id': '464ec066547846e59b210bc239909dcb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.251 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:12:16.252 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.252 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '980341e2-c538-43ff-b2d9-f51bf2c8c9e9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:12:16.252729', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '60e2956c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': 'fe044fe531084028a7c4ca7f803103ce1b46544c9b53320600bb964fae332546'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:12:16.252729', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '60e2a408-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12498.389436619, 'message_signature': '2cea3e8ae5c55ba5b707ddac4072c544f9525a9b5a5c347e37a0cfc7c387c2b2'}]}, 'timestamp': '2025-12-02 10:12:16.253372', '_unique_id': '4e7e3ef72b8c4de3a43c7b60459abf0a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:12:16.253 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:12:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:12:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:12:16.253 12 ERROR oslo_messaging.notify.messaging Dec 2 05:12:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e191 e191: 6 total, 6 up, 6 in Dec 2 05:12:16 localhost ovn_controller[154505]: 2025-12-02T10:12:16Z|00526|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:12:16 localhost nova_compute[281854]: 2025-12-02 10:12:16.651 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:16 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:16.909 263406 INFO neutron.agent.linux.ip_lib [None req-f0cf0a3b-6b3e-4618-96f5-732e76579a9d - - - - - -] Device tap3fc6bc7c-d6 cannot be used as it has no MAC address#033[00m Dec 2 05:12:16 localhost nova_compute[281854]: 2025-12-02 10:12:16.962 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:16 localhost kernel: device tap3fc6bc7c-d6 entered promiscuous mode Dec 2 05:12:16 localhost NetworkManager[5965]: [1764670336.9688] manager: (tap3fc6bc7c-d6): new Generic device (/org/freedesktop/NetworkManager/Devices/83) Dec 2 05:12:16 localhost ovn_controller[154505]: 2025-12-02T10:12:16Z|00527|binding|INFO|Claiming lport 3fc6bc7c-d600-4263-b6b5-a826d1a899f3 for this chassis. 
Dec 2 05:12:16 localhost ovn_controller[154505]: 2025-12-02T10:12:16Z|00528|binding|INFO|3fc6bc7c-d600-4263-b6b5-a826d1a899f3: Claiming unknown
Dec 2 05:12:16 localhost nova_compute[281854]: 2025-12-02 10:12:16.970 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:16 localhost systemd-udevd[329491]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:12:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:16.981 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe29:e21f/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7499f69a-21e4-43dd-8d90-9037f211beae', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7499f69a-21e4-43dd-8d90-9037f211beae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096ffa0a51b143039159efc232ec547a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de229515-8d8f-41bc-bfc9-16d179cdd33e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3fc6bc7c-d600-4263-b6b5-a826d1a899f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:12:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:16.983 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 3fc6bc7c-d600-4263-b6b5-a826d1a899f3 in datapath 7499f69a-21e4-43dd-8d90-9037f211beae bound to our chassis#033[00m
Dec 2 05:12:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:16.987 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port eda5af0c-4f23-4256-97dd-161fdf57e787 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 05:12:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:16.988 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7499f69a-21e4-43dd-8d90-9037f211beae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:12:16 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:16.989 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[51df919c-330c-40d5-bc30-80dfbd74d4ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:12:17 localhost ovn_controller[154505]: 2025-12-02T10:12:17Z|00529|binding|INFO|Setting lport 3fc6bc7c-d600-4263-b6b5-a826d1a899f3 ovn-installed in OVS
Dec 2 05:12:17 localhost ovn_controller[154505]: 2025-12-02T10:12:17Z|00530|binding|INFO|Setting lport 3fc6bc7c-d600-4263-b6b5-a826d1a899f3 up in Southbound
Dec 2 05:12:17 localhost nova_compute[281854]: 2025-12-02 10:12:17.015 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:17 localhost nova_compute[281854]: 2025-12-02 10:12:17.048 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:17 localhost nova_compute[281854]: 2025-12-02 10:12:17.075 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:17 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:17.471 2 INFO neutron.agent.securitygroups_rpc [None req-12cf221e-0940-4511-92ba-1f5763df32bf 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']#033[00m
Dec 2 05:12:17 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 2 05:12:17 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:12:17 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:12:17 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:12:17 localhost podman[329546]:
Dec 2 05:12:17 localhost podman[329546]: 2025-12-02 10:12:17.985919261 +0000 UTC m=+0.098176417 container create 98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7499f69a-21e4-43dd-8d90-9037f211beae, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 05:12:18 localhost systemd[1]: Started libpod-conmon-98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f.scope.
Dec 2 05:12:18 localhost podman[329546]: 2025-12-02 10:12:17.939968537 +0000 UTC m=+0.052225723 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:12:18 localhost systemd[1]: Started libcrun container.
Dec 2 05:12:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d875eb3c1b3c9e2ceb62f7bab21f952c4354d929abfca283190eb2f995b7f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:12:18 localhost podman[329546]: 2025-12-02 10:12:18.072466757 +0000 UTC m=+0.184723913 container init 98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7499f69a-21e4-43dd-8d90-9037f211beae, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 2 05:12:18 localhost podman[329546]: 2025-12-02 10:12:18.087695064 +0000 UTC m=+0.199952210 container start 98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7499f69a-21e4-43dd-8d90-9037f211beae, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:12:18 localhost dnsmasq[329564]: started, version 2.85 cachesize 150
Dec 2 05:12:18 localhost dnsmasq[329564]: DNS service limited to local subnets
Dec 2 05:12:18 localhost dnsmasq[329564]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:12:18 localhost dnsmasq[329564]: warning: no upstream servers configured
Dec 2 05:12:18 localhost dnsmasq-dhcp[329564]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 2 05:12:18 localhost dnsmasq[329564]: read /var/lib/neutron/dhcp/7499f69a-21e4-43dd-8d90-9037f211beae/addn_hosts - 0 addresses
Dec 2 05:12:18 localhost dnsmasq-dhcp[329564]: read /var/lib/neutron/dhcp/7499f69a-21e4-43dd-8d90-9037f211beae/host
Dec 2 05:12:18 localhost dnsmasq-dhcp[329564]: read /var/lib/neutron/dhcp/7499f69a-21e4-43dd-8d90-9037f211beae/opts
Dec 2 05:12:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:18.164 263406 INFO neutron.agent.dhcp.agent [None req-f0cf0a3b-6b3e-4618-96f5-732e76579a9d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:16Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6275e0d7-4643-4f6a-be6e-c6bc38b44570, ip_allocation=immediate, mac_address=fa:16:3e:ac:4c:e1, name=tempest-NetworksIpV6TestAttrs-1409601479, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:14Z, description=, dns_domain=, id=7499f69a-21e4-43dd-8d90-9037f211beae, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksIpV6TestAttrs-test-network-943954912, port_security_enabled=True, project_id=096ffa0a51b143039159efc232ec547a, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60372, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2798, status=ACTIVE, subnets=['401ed2f0-cf82-4106-9eea-aee47f00d9f7'], tags=[], tenant_id=096ffa0a51b143039159efc232ec547a, updated_at=2025-12-02T10:12:15Z, vlan_transparent=None, network_id=7499f69a-21e4-43dd-8d90-9037f211beae, port_security_enabled=True, project_id=096ffa0a51b143039159efc232ec547a, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff'], standard_attr_id=2808, status=DOWN, tags=[], tenant_id=096ffa0a51b143039159efc232ec547a, updated_at=2025-12-02T10:12:16Z on network 7499f69a-21e4-43dd-8d90-9037f211beae#033[00m
Dec 2 05:12:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:18.377 263406 INFO neutron.agent.dhcp.agent [None req-a5c9f2bc-2156-4555-a4ca-f799d6a4f989 - - - - - -] DHCP configuration for ports {'e6bafd7e-a7b7-4380-a96b-2e66c659fd1f'} is completed#033[00m
Dec 2 05:12:18 localhost dnsmasq[329564]: read /var/lib/neutron/dhcp/7499f69a-21e4-43dd-8d90-9037f211beae/addn_hosts - 1 addresses
Dec 2 05:12:18 localhost dnsmasq-dhcp[329564]: read /var/lib/neutron/dhcp/7499f69a-21e4-43dd-8d90-9037f211beae/host
Dec 2 05:12:18 localhost podman[329583]: 2025-12-02 10:12:18.388241383 +0000 UTC m=+0.069403251 container kill 98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7499f69a-21e4-43dd-8d90-9037f211beae, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 2 05:12:18 localhost dnsmasq-dhcp[329564]: read /var/lib/neutron/dhcp/7499f69a-21e4-43dd-8d90-9037f211beae/opts
Dec 2 05:12:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:18.733 263406 INFO neutron.agent.dhcp.agent [None req-ff0439dd-f66c-46d5-8d8d-6da67932b1ad - - - - - -] DHCP configuration for ports {'6275e0d7-4643-4f6a-be6e-c6bc38b44570'} is completed#033[00m
Dec 2 05:12:18 localhost dnsmasq[328891]: exiting on receipt of SIGTERM
Dec 2 05:12:18 localhost podman[329619]: 2025-12-02 10:12:18.814570494 +0000 UTC m=+0.062068565 container kill b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:12:18 localhost systemd[1]: libpod-b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a.scope: Deactivated successfully.
Dec 2 05:12:18 localhost podman[329639]: 2025-12-02 10:12:18.887233301 +0000 UTC m=+0.052480240 container died b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 2 05:12:18 localhost podman[329639]: 2025-12-02 10:12:18.93112034 +0000 UTC m=+0.096367249 container remove b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b81515-1a91-47bb-810b-f820ca0caeff, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:12:18 localhost systemd[1]: libpod-conmon-b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a.scope: Deactivated successfully.
Dec 2 05:12:18 localhost systemd[1]: tmp-crun.v3Wtgc.mount: Deactivated successfully.
Dec 2 05:12:18 localhost systemd[1]: var-lib-containers-storage-overlay-bbf59d4ae25454728b26aa2ee5b303a57d6e9d975e82bd83d233416387dbda34-merged.mount: Deactivated successfully.
Dec 2 05:12:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1965ecd259394969d6902012099f161effc69497021569f8f7dfe0b65565b3a-userdata-shm.mount: Deactivated successfully.
Dec 2 05:12:19 localhost dnsmasq[329564]: exiting on receipt of SIGTERM
Dec 2 05:12:19 localhost podman[329672]: 2025-12-02 10:12:19.122993773 +0000 UTC m=+0.058833469 container kill 98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7499f69a-21e4-43dd-8d90-9037f211beae, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:12:19 localhost systemd[1]: tmp-crun.NkYxaf.mount: Deactivated successfully.
Dec 2 05:12:19 localhost systemd[1]: libpod-98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f.scope: Deactivated successfully.
Dec 2 05:12:19 localhost podman[329686]: 2025-12-02 10:12:19.197965211 +0000 UTC m=+0.058641974 container died 98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7499f69a-21e4-43dd-8d90-9037f211beae, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 2 05:12:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:19.206 263406 INFO neutron.agent.dhcp.agent [None req-af69e25e-f22c-4bd9-aa8d-86621db1e93f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:12:19 localhost systemd[1]: run-netns-qdhcp\x2de6b81515\x2d1a91\x2d47bb\x2d810b\x2df820ca0caeff.mount: Deactivated successfully.
Dec 2 05:12:19 localhost podman[329686]: 2025-12-02 10:12:19.238525682 +0000 UTC m=+0.099202405 container cleanup 98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7499f69a-21e4-43dd-8d90-9037f211beae, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 05:12:19 localhost systemd[1]: libpod-conmon-98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f.scope: Deactivated successfully.
Dec 2 05:12:19 localhost podman[329687]: 2025-12-02 10:12:19.292754277 +0000 UTC m=+0.146318621 container remove 98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7499f69a-21e4-43dd-8d90-9037f211beae, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 2 05:12:19 localhost ovn_controller[154505]: 2025-12-02T10:12:19Z|00531|binding|INFO|Releasing lport 3fc6bc7c-d600-4263-b6b5-a826d1a899f3 from this chassis (sb_readonly=0)
Dec 2 05:12:19 localhost kernel: device tap3fc6bc7c-d6 left promiscuous mode
Dec 2 05:12:19 localhost ovn_controller[154505]: 2025-12-02T10:12:19Z|00532|binding|INFO|Setting lport 3fc6bc7c-d600-4263-b6b5-a826d1a899f3 down in Southbound
Dec 2 05:12:19 localhost nova_compute[281854]: 2025-12-02 10:12:19.308 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:19.319 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe29:e21f/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-7499f69a-21e4-43dd-8d90-9037f211beae', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7499f69a-21e4-43dd-8d90-9037f211beae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096ffa0a51b143039159efc232ec547a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de229515-8d8f-41bc-bfc9-16d179cdd33e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3fc6bc7c-d600-4263-b6b5-a826d1a899f3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:12:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:19.321 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 3fc6bc7c-d600-4263-b6b5-a826d1a899f3 in datapath 7499f69a-21e4-43dd-8d90-9037f211beae unbound from our chassis#033[00m
Dec 2 05:12:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:19.323 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7499f69a-21e4-43dd-8d90-9037f211beae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:12:19 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:19.324 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[80742142-9a56-4112-abac-73f61204db8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:12:19 localhost nova_compute[281854]: 2025-12-02 10:12:19.334 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:19.650 263406 INFO neutron.agent.dhcp.agent [None req-166fd323-dce2-4942-9c5c-b3ec4409e54b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:12:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:19.681 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:12:19 localhost systemd[1]: tmp-crun.6QgC2a.mount: Deactivated successfully.
Dec 2 05:12:19 localhost systemd[1]: var-lib-containers-storage-overlay-d4d875eb3c1b3c9e2ceb62f7bab21f952c4354d929abfca283190eb2f995b7f2-merged.mount: Deactivated successfully.
Dec 2 05:12:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98fb077e84c96895dbfdbc31164d597ffc3b5e23ab83bd6453106b41ef744d9f-userdata-shm.mount: Deactivated successfully.
Dec 2 05:12:19 localhost systemd[1]: run-netns-qdhcp\x2d7499f69a\x2d21e4\x2d43dd\x2d8d90\x2d9037f211beae.mount: Deactivated successfully.
Dec 2 05:12:20 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 2 05:12:20 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 2 05:12:20 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 2 05:12:20 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 2 05:12:21 localhost nova_compute[281854]: 2025-12-02 10:12:21.183 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:12:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e192 e192: 6 total, 6 up, 6 in
Dec 2 05:12:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 05:12:22 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:22.942 263406 INFO neutron.agent.linux.ip_lib [None req-e94ea10b-b576-462b-91dc-f7657f041dcc - - - - - -] Device tap7db38a9e-eb cannot be used as it has no MAC address#033[00m
Dec 2 05:12:22 localhost podman[329716]: 2025-12-02 10:12:22.967383882 +0000 UTC m=+0.092384313 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:12:22 localhost nova_compute[281854]: 2025-12-02 10:12:22.972 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:22 localhost kernel: device tap7db38a9e-eb entered promiscuous mode
Dec 2 05:12:22 localhost NetworkManager[5965]: [1764670342.9807] manager: (tap7db38a9e-eb): new Generic device (/org/freedesktop/NetworkManager/Devices/84)
Dec 2 05:12:22 localhost nova_compute[281854]: 2025-12-02 10:12:22.981 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:22 localhost ovn_controller[154505]: 2025-12-02T10:12:22Z|00533|binding|INFO|Claiming lport 7db38a9e-ebcb-4d9d-918e-68ae809943d3 for this chassis.
Dec 2 05:12:22 localhost ovn_controller[154505]: 2025-12-02T10:12:22Z|00534|binding|INFO|7db38a9e-ebcb-4d9d-918e-68ae809943d3: Claiming unknown
Dec 2 05:12:22 localhost systemd-udevd[329742]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:12:22 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:22.993 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe21:5f27/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-e5941a38-5b52-4ec5-8b26-a548856326e1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5941a38-5b52-4ec5-8b26-a548856326e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096ffa0a51b143039159efc232ec547a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddfbbc4e-1c2d-489c-bd00-e334cdcd7a08, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7db38a9e-ebcb-4d9d-918e-68ae809943d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:12:22 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:22.995 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 7db38a9e-ebcb-4d9d-918e-68ae809943d3 in datapath e5941a38-5b52-4ec5-8b26-a548856326e1 bound to our chassis#033[00m
Dec 2 05:12:22 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:22.996 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e5941a38-5b52-4ec5-8b26-a548856326e1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:12:22 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:22.997 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[c785fdbf-ebfb-4c23-861b-abe30e7318c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:12:23 localhost podman[329716]: 2025-12-02 10:12:23.008075677 +0000 UTC m=+0.133076128 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 2 05:12:23 localhost journal[230136]: ethtool ioctl error on tap7db38a9e-eb: No such device
Dec 2 05:12:23 localhost nova_compute[281854]: 2025-12-02 10:12:23.019 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:23 localhost ovn_controller[154505]: 2025-12-02T10:12:23Z|00535|binding|INFO|Setting lport 7db38a9e-ebcb-4d9d-918e-68ae809943d3 ovn-installed in OVS
Dec 2 05:12:23 localhost ovn_controller[154505]: 2025-12-02T10:12:23Z|00536|binding|INFO|Setting lport 7db38a9e-ebcb-4d9d-918e-68ae809943d3 up in Southbound
Dec 2 05:12:23 localhost journal[230136]: ethtool ioctl error on tap7db38a9e-eb: No such device
Dec 2 05:12:23 localhost nova_compute[281854]: 2025-12-02 10:12:23.023 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:23 localhost journal[230136]: ethtool ioctl error on tap7db38a9e-eb: No such device
Dec 2 05:12:23 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 05:12:23 localhost journal[230136]: ethtool ioctl error on tap7db38a9e-eb: No such device Dec 2 05:12:23 localhost journal[230136]: ethtool ioctl error on tap7db38a9e-eb: No such device Dec 2 05:12:23 localhost journal[230136]: ethtool ioctl error on tap7db38a9e-eb: No such device Dec 2 05:12:23 localhost journal[230136]: ethtool ioctl error on tap7db38a9e-eb: No such device Dec 2 05:12:23 localhost journal[230136]: ethtool ioctl error on tap7db38a9e-eb: No such device Dec 2 05:12:23 localhost nova_compute[281854]: 2025-12-02 10:12:23.060 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:23 localhost nova_compute[281854]: 2025-12-02 10:12:23.089 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:23 localhost podman[329812]: Dec 2 05:12:24 localhost podman[329812]: 2025-12-02 10:12:24.007234594 +0000 UTC m=+0.093599846 container create e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5941a38-5b52-4ec5-8b26-a548856326e1, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:12:24 localhost systemd[1]: Started libpod-conmon-e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746.scope. Dec 2 05:12:24 localhost podman[329812]: 2025-12-02 10:12:23.961815812 +0000 UTC m=+0.048181084 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:24 localhost systemd[1]: Started libcrun container. 
Dec 2 05:12:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c9816e47c75cc8fff026b35d669da0f5a6ffd6039598106d9c9dfaf32ebd644/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:24 localhost podman[329812]: 2025-12-02 10:12:24.09081635 +0000 UTC m=+0.177181602 container init e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5941a38-5b52-4ec5-8b26-a548856326e1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:12:24 localhost podman[329812]: 2025-12-02 10:12:24.100023446 +0000 UTC m=+0.186388708 container start e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5941a38-5b52-4ec5-8b26-a548856326e1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:12:24 localhost dnsmasq[329831]: started, version 2.85 cachesize 150 Dec 2 05:12:24 localhost dnsmasq[329831]: DNS service limited to local subnets Dec 2 05:12:24 localhost dnsmasq[329831]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:24 localhost dnsmasq[329831]: warning: no upstream servers configured Dec 
2 05:12:24 localhost dnsmasq[329831]: read /var/lib/neutron/dhcp/e5941a38-5b52-4ec5-8b26-a548856326e1/addn_hosts - 0 addresses Dec 2 05:12:24 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:24.212 263406 INFO neutron.agent.dhcp.agent [None req-58ec0c9c-c91c-454d-9396-097c0f3b11ce - - - - - -] DHCP configuration for ports {'81275455-dcc9-470b-999b-b60444ee78b6'} is completed#033[00m Dec 2 05:12:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:24.261 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 79e4d960-f1e9-41e9-8db0-4e28d904bc22 with type ""#033[00m Dec 2 05:12:24 localhost ovn_controller[154505]: 2025-12-02T10:12:24Z|00537|binding|INFO|Removing iface tap7db38a9e-eb ovn-installed in OVS Dec 2 05:12:24 localhost ovn_controller[154505]: 2025-12-02T10:12:24Z|00538|binding|INFO|Removing lport 7db38a9e-ebcb-4d9d-918e-68ae809943d3 ovn-installed in OVS Dec 2 05:12:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:24.264 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe21:5f27/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-e5941a38-5b52-4ec5-8b26-a548856326e1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5941a38-5b52-4ec5-8b26-a548856326e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096ffa0a51b143039159efc232ec547a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddfbbc4e-1c2d-489c-bd00-e334cdcd7a08, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7db38a9e-ebcb-4d9d-918e-68ae809943d3) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:24 localhost nova_compute[281854]: 2025-12-02 10:12:24.263 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:24.267 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 7db38a9e-ebcb-4d9d-918e-68ae809943d3 in datapath e5941a38-5b52-4ec5-8b26-a548856326e1 unbound from our chassis#033[00m Dec 2 05:12:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:24.268 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e5941a38-5b52-4ec5-8b26-a548856326e1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:24 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:24.269 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8e6a76-243c-444e-b127-d248088f4af0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:24 localhost nova_compute[281854]: 2025-12-02 10:12:24.272 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost dnsmasq[329831]: exiting on receipt of SIGTERM Dec 2 05:12:24 localhost podman[329848]: 2025-12-02 10:12:24.440314014 +0000 UTC m=+0.067813578 container kill e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-e5941a38-5b52-4ec5-8b26-a548856326e1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:12:24 localhost systemd[1]: libpod-e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746.scope: Deactivated successfully. Dec 2 05:12:24 localhost ovn_controller[154505]: 2025-12-02T10:12:24Z|00539|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:12:24 localhost nova_compute[281854]: 2025-12-02 10:12:24.524 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost podman[329863]: 2025-12-02 10:12:24.546515054 +0000 UTC m=+0.079517259 container died e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5941a38-5b52-4ec5-8b26-a548856326e1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:12:24 localhost podman[329863]: 2025-12-02 10:12:24.588073512 +0000 UTC m=+0.121075677 container remove e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e5941a38-5b52-4ec5-8b26-a548856326e1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:24 localhost systemd[1]: libpod-conmon-e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746.scope: Deactivated successfully. Dec 2 05:12:24 localhost nova_compute[281854]: 2025-12-02 10:12:24.599 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost kernel: device tap7db38a9e-eb left promiscuous mode Dec 2 05:12:24 localhost nova_compute[281854]: 2025-12-02 10:12:24.612 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:24 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:24.628 263406 INFO neutron.agent.dhcp.agent [None req-ceb3f944-56fd-4f94-92b5-ccaeec7af91f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:24 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:24.629 263406 INFO neutron.agent.dhcp.agent [None req-ceb3f944-56fd-4f94-92b5-ccaeec7af91f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:24 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:24 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": 
"json"} : dispatch Dec 2 05:12:24 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:12:24 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:12:24 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e193 e193: 6 total, 6 up, 6 in Dec 2 05:12:25 localhost systemd[1]: var-lib-containers-storage-overlay-7c9816e47c75cc8fff026b35d669da0f5a6ffd6039598106d9c9dfaf32ebd644-merged.mount: Deactivated successfully. Dec 2 05:12:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e23c89c91c2e65f9f29640f5483b0afd97ccd2241bab5d638b5c8041f0fe2746-userdata-shm.mount: Deactivated successfully. Dec 2 05:12:25 localhost systemd[1]: run-netns-qdhcp\x2de5941a38\x2d5b52\x2d4ec5\x2d8b26\x2da548856326e1.mount: Deactivated successfully. 
Dec 2 05:12:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e194 e194: 6 total, 6 up, 6 in Dec 2 05:12:26 localhost nova_compute[281854]: 2025-12-02 10:12:26.187 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:12:26 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:26.382 263406 INFO neutron.agent.linux.ip_lib [None req-1a475abf-8b54-4c02-9ae1-9d6f9e605242 - - - - - -] Device tap5cd3a631-9d cannot be used as it has no MAC address#033[00m Dec 2 05:12:26 localhost systemd[1]: tmp-crun.PHX4vd.mount: Deactivated successfully. Dec 2 05:12:26 localhost podman[329892]: 2025-12-02 10:12:26.404885478 +0000 UTC m=+0.096676708 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:26 localhost nova_compute[281854]: 2025-12-02 10:12:26.417 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:26 localhost kernel: device tap5cd3a631-9d entered promiscuous mode Dec 2 05:12:26 localhost NetworkManager[5965]: [1764670346.4254] manager: (tap5cd3a631-9d): new Generic device (/org/freedesktop/NetworkManager/Devices/85) Dec 2 05:12:26 localhost ovn_controller[154505]: 2025-12-02T10:12:26Z|00540|binding|INFO|Claiming lport 5cd3a631-9d08-4568-9b09-16bfca349289 for this chassis. Dec 2 05:12:26 localhost ovn_controller[154505]: 2025-12-02T10:12:26Z|00541|binding|INFO|5cd3a631-9d08-4568-9b09-16bfca349289: Claiming unknown Dec 2 05:12:26 localhost nova_compute[281854]: 2025-12-02 10:12:26.428 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:26 localhost systemd-udevd[329917]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:12:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:26.436 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-64648a3d-835b-4d29-8cd8-3f34abaacc37', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64648a3d-835b-4d29-8cd8-3f34abaacc37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096ffa0a51b143039159efc232ec547a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c90c0f68-19d2-4032-9c10-7810e57b2671, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5cd3a631-9d08-4568-9b09-16bfca349289) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:26 localhost podman[329892]: 2025-12-02 10:12:26.438361499 +0000 UTC m=+0.130152699 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:12:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:26.438 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 5cd3a631-9d08-4568-9b09-16bfca349289 in datapath 64648a3d-835b-4d29-8cd8-3f34abaacc37 bound to our chassis#033[00m Dec 2 05:12:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:26.440 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 64648a3d-835b-4d29-8cd8-3f34abaacc37 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:26 localhost 
ovn_metadata_agent[160216]: 2025-12-02 10:12:26.441 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d57860-8405-4fc2-87bf-089db5206178]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:26 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:12:26 localhost journal[230136]: ethtool ioctl error on tap5cd3a631-9d: No such device Dec 2 05:12:26 localhost journal[230136]: ethtool ioctl error on tap5cd3a631-9d: No such device Dec 2 05:12:26 localhost ovn_controller[154505]: 2025-12-02T10:12:26Z|00542|binding|INFO|Setting lport 5cd3a631-9d08-4568-9b09-16bfca349289 ovn-installed in OVS Dec 2 05:12:26 localhost ovn_controller[154505]: 2025-12-02T10:12:26Z|00543|binding|INFO|Setting lport 5cd3a631-9d08-4568-9b09-16bfca349289 up in Southbound Dec 2 05:12:26 localhost nova_compute[281854]: 2025-12-02 10:12:26.468 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:26 localhost journal[230136]: ethtool ioctl error on tap5cd3a631-9d: No such device Dec 2 05:12:26 localhost journal[230136]: ethtool ioctl error on tap5cd3a631-9d: No such device Dec 2 05:12:26 localhost journal[230136]: ethtool ioctl error on tap5cd3a631-9d: No such device Dec 2 05:12:26 localhost journal[230136]: ethtool ioctl error on tap5cd3a631-9d: No such device Dec 2 05:12:26 localhost journal[230136]: ethtool ioctl error on tap5cd3a631-9d: No such device Dec 2 05:12:26 localhost journal[230136]: ethtool ioctl error on tap5cd3a631-9d: No such device Dec 2 05:12:26 localhost nova_compute[281854]: 2025-12-02 10:12:26.516 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:26 localhost nova_compute[281854]: 2025-12-02 10:12:26.551 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e195 e195: 6 total, 6 up, 6 in Dec 2 05:12:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:26.834 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9683fa55-109f-4a3c-8663-60b73e691f42 with type ""#033[00m Dec 2 05:12:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:26.836 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-64648a3d-835b-4d29-8cd8-3f34abaacc37', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64648a3d-835b-4d29-8cd8-3f34abaacc37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096ffa0a51b143039159efc232ec547a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c90c0f68-19d2-4032-9c10-7810e57b2671, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5cd3a631-9d08-4568-9b09-16bfca349289) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:26 localhost ovn_controller[154505]: 2025-12-02T10:12:26Z|00544|binding|INFO|Removing iface tap5cd3a631-9d ovn-installed in OVS Dec 2 05:12:26 localhost 
ovn_metadata_agent[160216]: 2025-12-02 10:12:26.838 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 5cd3a631-9d08-4568-9b09-16bfca349289 in datapath 64648a3d-835b-4d29-8cd8-3f34abaacc37 unbound from our chassis#033[00m Dec 2 05:12:26 localhost ovn_controller[154505]: 2025-12-02T10:12:26Z|00545|binding|INFO|Removing lport 5cd3a631-9d08-4568-9b09-16bfca349289 ovn-installed in OVS Dec 2 05:12:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:26.839 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 64648a3d-835b-4d29-8cd8-3f34abaacc37 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:26 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:26.840 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[914e0d6a-2e0d-49f2-a30a-9c8a6e645f2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:26 localhost nova_compute[281854]: 2025-12-02 10:12:26.888 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:26 localhost nova_compute[281854]: 2025-12-02 10:12:26.890 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:12:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:12:27 localhost podman[329982]: 2025-12-02 10:12:27.45794891 +0000 UTC m=+0.099170873 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, name=ubi9-minimal, vcs-type=git) Dec 2 05:12:27 localhost podman[330002]: Dec 2 05:12:27 localhost podman[330002]: 2025-12-02 10:12:27.484891809 +0000 UTC m=+0.090823281 container create e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:12:27 localhost podman[329982]: 2025-12-02 10:12:27.495152402 +0000 UTC m=+0.136374345 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6) Dec 2 05:12:27 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:12:27 localhost systemd[1]: Started libpod-conmon-e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2.scope. Dec 2 05:12:27 localhost podman[330002]: 2025-12-02 10:12:27.447944474 +0000 UTC m=+0.053876016 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:27 localhost systemd[1]: Started libcrun container. 
Dec 2 05:12:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e55b32f7917d1bfc37ba3bfadf74cf4ebf2c68e08e8a4389ecaa7281d36c25b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:27 localhost podman[329983]: 2025-12-02 10:12:27.502038596 +0000 UTC m=+0.137114906 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:12:27 localhost podman[330002]: 2025-12-02 10:12:27.558604833 +0000 UTC m=+0.164536335 container init e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:27 localhost podman[330002]: 2025-12-02 10:12:27.568151008 +0000 UTC m=+0.174082510 container start e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:12:27 localhost dnsmasq[330045]: started, version 2.85 cachesize 150 Dec 2 05:12:27 localhost dnsmasq[330045]: DNS service limited to local subnets Dec 2 05:12:27 localhost dnsmasq[330045]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:27 localhost dnsmasq[330045]: warning: no upstream servers configured Dec 2 05:12:27 localhost dnsmasq-dhcp[330045]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 2 05:12:27 localhost dnsmasq[330045]: read /var/lib/neutron/dhcp/64648a3d-835b-4d29-8cd8-3f34abaacc37/addn_hosts - 0 addresses Dec 2 05:12:27 localhost dnsmasq-dhcp[330045]: read /var/lib/neutron/dhcp/64648a3d-835b-4d29-8cd8-3f34abaacc37/host Dec 2 05:12:27 localhost dnsmasq-dhcp[330045]: read 
/var/lib/neutron/dhcp/64648a3d-835b-4d29-8cd8-3f34abaacc37/opts Dec 2 05:12:27 localhost podman[329983]: 2025-12-02 10:12:27.58288422 +0000 UTC m=+0.217960560 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:12:27 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.650 263406 INFO neutron.agent.dhcp.agent [None req-6c4a45a3-c7bb-422e-beae-d8715cc80aed - - - - - -] DHCP configuration for ports {'c435f09f-f557-4267-b1b5-e62d98b6c4b5'} is completed#033[00m Dec 2 05:12:27 localhost kernel: device tap5cd3a631-9d left promiscuous mode Dec 2 05:12:27 localhost nova_compute[281854]: 2025-12-02 10:12:27.665 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:27 localhost nova_compute[281854]: 2025-12-02 10:12:27.687 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:27 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e196 e196: 6 total, 6 up, 6 in Dec 2 05:12:27 localhost dnsmasq[330045]: read /var/lib/neutron/dhcp/64648a3d-835b-4d29-8cd8-3f34abaacc37/addn_hosts - 0 addresses Dec 2 05:12:27 localhost dnsmasq-dhcp[330045]: read /var/lib/neutron/dhcp/64648a3d-835b-4d29-8cd8-3f34abaacc37/host Dec 2 05:12:27 localhost podman[330068]: 2025-12-02 10:12:27.879962977 +0000 UTC m=+0.058757007 container kill e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:12:27 localhost dnsmasq-dhcp[330045]: read /var/lib/neutron/dhcp/64648a3d-835b-4d29-8cd8-3f34abaacc37/opts Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR 
neutron.agent.dhcp.agent [None req-1a475abf-8b54-4c02-9ae1-9d6f9e605242 - - - - - -] Unable to reload_allocations dhcp for 64648a3d-835b-4d29-8cd8-3f34abaacc37.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap5cd3a631-9d not found in namespace qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37. Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 
2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 
10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent return fut.result() Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent return self.__get_result() Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent raise self._exception Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, 
kwargs, Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap5cd3a631-9d not found in namespace qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37. Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.910 263406 ERROR neutron.agent.dhcp.agent #033[00m Dec 2 05:12:27 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:27.917 263406 INFO neutron.agent.dhcp.agent [None req-ac6c4a74-9018-4989-9341-cd6278919b9d - - - - - -] Synchronizing state#033[00m Dec 2 05:12:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:28.116 263406 INFO neutron.agent.dhcp.agent [None req-171d14fd-52cf-40cb-9315-4edb7100f18b - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 2 05:12:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:28.117 263406 INFO neutron.agent.dhcp.agent [-] Starting network 64648a3d-835b-4d29-8cd8-3f34abaacc37 dhcp configuration#033[00m Dec 2 05:12:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:28.118 263406 INFO neutron.agent.dhcp.agent [-] Finished network 64648a3d-835b-4d29-8cd8-3f34abaacc37 dhcp configuration#033[00m Dec 2 05:12:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:28.118 263406 INFO neutron.agent.dhcp.agent [-] Starting network 7499f69a-21e4-43dd-8d90-9037f211beae dhcp configuration#033[00m Dec 2 05:12:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:28.119 263406 INFO neutron.agent.dhcp.agent [-] Finished network 7499f69a-21e4-43dd-8d90-9037f211beae dhcp configuration#033[00m Dec 2 
05:12:28 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:28.119 263406 INFO neutron.agent.dhcp.agent [None req-171d14fd-52cf-40cb-9315-4edb7100f18b - - - - - -] Synchronizing state complete#033[00m Dec 2 05:12:28 localhost nova_compute[281854]: 2025-12-02 10:12:28.183 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:28 localhost ovn_controller[154505]: 2025-12-02T10:12:28Z|00546|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:12:28 localhost nova_compute[281854]: 2025-12-02 10:12:28.265 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:28 localhost dnsmasq[330045]: exiting on receipt of SIGTERM Dec 2 05:12:28 localhost podman[330098]: 2025-12-02 10:12:28.40433722 +0000 UTC m=+0.065186018 container kill e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:12:28 localhost systemd[1]: libpod-e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2.scope: Deactivated successfully. 
Dec 2 05:12:28 localhost podman[330112]: 2025-12-02 10:12:28.486304035 +0000 UTC m=+0.068443665 container died e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:12:28 localhost podman[330112]: 2025-12-02 10:12:28.524510113 +0000 UTC m=+0.106649703 container cleanup e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:12:28 localhost systemd[1]: libpod-conmon-e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2.scope: Deactivated successfully. 
Dec 2 05:12:28 localhost podman[330114]: 2025-12-02 10:12:28.562110136 +0000 UTC m=+0.134775764 container remove e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-64648a3d-835b-4d29-8cd8-3f34abaacc37, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:12:29 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e197 e197: 6 total, 6 up, 6 in Dec 2 05:12:29 localhost nova_compute[281854]: 2025-12-02 10:12:29.313 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:29 localhost systemd[1]: var-lib-containers-storage-overlay-5e55b32f7917d1bfc37ba3bfadf74cf4ebf2c68e08e8a4389ecaa7281d36c25b-merged.mount: Deactivated successfully. Dec 2 05:12:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e431b3e08409da0d2dafa19fffaa17db5444b42a11cdc11303acfe59acd8c0e2-userdata-shm.mount: Deactivated successfully. Dec 2 05:12:29 localhost systemd[1]: run-netns-qdhcp\x2d64648a3d\x2d835b\x2d4d29\x2d8cd8\x2d3f34abaacc37.mount: Deactivated successfully. 
Dec 2 05:12:30 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e198 e198: 6 total, 6 up, 6 in Dec 2 05:12:31 localhost nova_compute[281854]: 2025-12-02 10:12:31.191 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e199 e199: 6 total, 6 up, 6 in Dec 2 05:12:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:12:31 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3972691788' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:12:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:12:31 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3972691788' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:12:32 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} : dispatch Dec 2 05:12:32 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} : dispatch Dec 2 05:12:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]}]': finished Dec 2 05:12:32 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 05:12:32 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:12:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:12:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e200 e200: 6 total, 6 up, 6 in Dec 2 05:12:33 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:12:34 localhost openstack_network_exporter[242845]: ERROR 10:12:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:12:34 localhost openstack_network_exporter[242845]: ERROR 10:12:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:12:34 localhost openstack_network_exporter[242845]: ERROR 10:12:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:12:34 localhost openstack_network_exporter[242845]: ERROR 10:12:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:12:34 localhost openstack_network_exporter[242845]: Dec 2 05:12:34 localhost openstack_network_exporter[242845]: ERROR 10:12:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:12:34 localhost openstack_network_exporter[242845]: Dec 2 05:12:34 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 2 
05:12:34 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} : dispatch Dec 2 05:12:34 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} : dispatch Dec 2 05:12:34 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]}]': finished Dec 2 05:12:35 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:35.025 2 INFO neutron.agent.securitygroups_rpc [None req-c29e5bff-b968-4a83-beb0-2b46e231db68 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']#033[00m Dec 2 05:12:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:35.073 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:35.101 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], 
options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:35.103 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:12:35 localhost nova_compute[281854]: 2025-12-02 10:12:35.134 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost nova_compute[281854]: 2025-12-02 10:12:35.168 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:12:35 localhost podman[330225]: 2025-12-02 10:12:35.466098259 +0000 UTC m=+0.097084949 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 2 05:12:35 localhost podman[330225]: 2025-12-02 10:12:35.509048883 +0000 UTC m=+0.140035523 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:12:35 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:12:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:35.660 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:35.863 263406 INFO neutron.agent.linux.ip_lib [None req-816e9919-ed63-4148-b713-8cde65fae396 - - - - - -] Device tapc46392fc-2a cannot be used as it has no MAC address#033[00m Dec 2 05:12:35 localhost nova_compute[281854]: 2025-12-02 10:12:35.886 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost kernel: device tapc46392fc-2a entered promiscuous mode Dec 2 05:12:35 localhost nova_compute[281854]: 2025-12-02 10:12:35.896 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost NetworkManager[5965]: [1764670355.8982] manager: (tapc46392fc-2a): new Generic device (/org/freedesktop/NetworkManager/Devices/86) Dec 2 05:12:35 localhost systemd-udevd[330254]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:12:35 localhost ovn_controller[154505]: 2025-12-02T10:12:35Z|00547|binding|INFO|Claiming lport c46392fc-2aa4-4d80-b8dd-cf511eace16e for this chassis. 
Dec 2 05:12:35 localhost ovn_controller[154505]: 2025-12-02T10:12:35Z|00548|binding|INFO|c46392fc-2aa4-4d80-b8dd-cf511eace16e: Claiming unknown Dec 2 05:12:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:35.915 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-474eb989-d757-4df7-9a0f-19d414dbaf64', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-474eb989-d757-4df7-9a0f-19d414dbaf64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8265533d-51c3-4865-8bdc-d09b3aea005a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c46392fc-2aa4-4d80-b8dd-cf511eace16e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:35.917 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c46392fc-2aa4-4d80-b8dd-cf511eace16e in datapath 474eb989-d757-4df7-9a0f-19d414dbaf64 bound to our chassis#033[00m Dec 2 05:12:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:35.918 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
474eb989-d757-4df7-9a0f-19d414dbaf64 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:35.919 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3e9e84-b724-411a-9c71-9eaafed9c742]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:35 localhost journal[230136]: ethtool ioctl error on tapc46392fc-2a: No such device Dec 2 05:12:35 localhost journal[230136]: ethtool ioctl error on tapc46392fc-2a: No such device Dec 2 05:12:35 localhost nova_compute[281854]: 2025-12-02 10:12:35.937 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost nova_compute[281854]: 2025-12-02 10:12:35.941 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost journal[230136]: ethtool ioctl error on tapc46392fc-2a: No such device Dec 2 05:12:35 localhost journal[230136]: ethtool ioctl error on tapc46392fc-2a: No such device Dec 2 05:12:35 localhost journal[230136]: ethtool ioctl error on tapc46392fc-2a: No such device Dec 2 05:12:35 localhost journal[230136]: ethtool ioctl error on tapc46392fc-2a: No such device Dec 2 05:12:35 localhost ovn_controller[154505]: 2025-12-02T10:12:35Z|00549|binding|INFO|Setting lport c46392fc-2aa4-4d80-b8dd-cf511eace16e ovn-installed in OVS Dec 2 05:12:35 localhost ovn_controller[154505]: 2025-12-02T10:12:35Z|00550|binding|INFO|Setting lport c46392fc-2aa4-4d80-b8dd-cf511eace16e up in Southbound Dec 2 05:12:35 localhost journal[230136]: ethtool ioctl error on tapc46392fc-2a: No such device Dec 2 05:12:35 localhost nova_compute[281854]: 2025-12-02 10:12:35.963 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:35 localhost journal[230136]: ethtool ioctl error on tapc46392fc-2a: No such device Dec 2 05:12:35 localhost nova_compute[281854]: 2025-12-02 10:12:35.980 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:36 localhost nova_compute[281854]: 2025-12-02 10:12:36.006 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:36 localhost podman[240799]: time="2025-12-02T10:12:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:12:36 localhost podman[240799]: @ - - [02/Dec/2025:10:12:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156107 "" "Go-http-client/1.1" Dec 2 05:12:36 localhost podman[240799]: @ - - [02/Dec/2025:10:12:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19258 "" "Go-http-client/1.1" Dec 2 05:12:36 localhost nova_compute[281854]: 2025-12-02 10:12:36.221 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:36.287 2 INFO neutron.agent.securitygroups_rpc [None req-11289385-b413-4a50-89a7-e0a67d214908 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']#033[00m Dec 2 05:12:36 localhost ceph-mon[298296]: 
mon.np0005541913@1(peon).osd e201 e201: 6 total, 6 up, 6 in Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.646329) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356646362, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2856, "num_deletes": 268, "total_data_size": 4652381, "memory_usage": 4791136, "flush_reason": "Manual Compaction"} Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356663736, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3040370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28409, "largest_seqno": 31259, "table_properties": {"data_size": 3028743, "index_size": 7492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27700, "raw_average_key_size": 22, "raw_value_size": 3004285, "raw_average_value_size": 2450, "num_data_blocks": 314, "num_entries": 1226, "num_filter_entries": 1226, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": 
"leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670242, "oldest_key_time": 1764670242, "file_creation_time": 1764670356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 17441 microseconds, and 4089 cpu microseconds. Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.663768) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3040370 bytes OK Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.663785) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666082) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666097) EVENT_LOG_v1 {"time_micros": 1764670356666093, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666112) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max 
bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 4638877, prev total WAL file size 4638877, number of live WAL files 2. Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666838) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end) Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(2969KB)], [48(16MB)] Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356666905, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 20006033, "oldest_snapshot_seqno": -1} Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 13619 keys, 18489822 bytes, temperature: kUnknown Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356773672, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 18489822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18410882, "index_size": 43831, 
"index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34053, "raw_key_size": 363795, "raw_average_key_size": 26, "raw_value_size": 18177986, "raw_average_value_size": 1334, "num_data_blocks": 1659, "num_entries": 13619, "num_filter_entries": 13619, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.773995) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 18489822 bytes Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.775994) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.2 rd, 173.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 16.2 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(12.7) write-amplify(6.1) OK, records in: 14176, records dropped: 557 output_compression: NoCompression Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.776021) EVENT_LOG_v1 {"time_micros": 1764670356776009, "job": 28, "event": "compaction_finished", "compaction_time_micros": 106870, "compaction_time_cpu_micros": 50985, "output_level": 6, "num_output_files": 1, "total_output_size": 18489822, "num_input_records": 14176, "num_output_records": 13619, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356776661, "job": 28, "event": "table_file_deletion", "file_number": 50} Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356779081, "job": 
28, "event": "table_file_deletion", "file_number": 48} Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.779193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.779198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.779199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.779201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:12:36.779202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:12:36 localhost podman[330325]: Dec 2 05:12:36 localhost podman[330325]: 2025-12-02 10:12:36.861342931 +0000 UTC m=+0.076176741 container create a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 2 05:12:36 localhost systemd[1]: Started libpod-conmon-a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8.scope. 
Dec 2 05:12:36 localhost podman[330325]: 2025-12-02 10:12:36.819528317 +0000 UTC m=+0.034362177 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:36 localhost systemd[1]: Started libcrun container. Dec 2 05:12:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0ff32bb3b0c7e43de93978e58a81d8da0d84d48e152a3ded188d7eca22b1fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:36 localhost podman[330325]: 2025-12-02 10:12:36.934061849 +0000 UTC m=+0.148895659 container init a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:12:36 localhost podman[330325]: 2025-12-02 10:12:36.945575426 +0000 UTC m=+0.160409246 container start a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:36 localhost dnsmasq[330343]: started, version 2.85 cachesize 150 Dec 2 05:12:36 localhost dnsmasq[330343]: DNS service limited to local subnets Dec 2 05:12:36 localhost 
dnsmasq[330343]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:36 localhost dnsmasq[330343]: warning: no upstream servers configured Dec 2 05:12:36 localhost dnsmasq-dhcp[330343]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Dec 2 05:12:36 localhost dnsmasq[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/addn_hosts - 0 addresses Dec 2 05:12:36 localhost dnsmasq-dhcp[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/host Dec 2 05:12:36 localhost dnsmasq-dhcp[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/opts Dec 2 05:12:36 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:36.987 263406 INFO neutron.agent.dhcp.agent [None req-816e9919-ed63-4148-b713-8cde65fae396 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:35Z, description=, device_id=9ad261e0-bab0-4724-94e5-b35ab4156358, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f4a13a37-be3e-4104-8212-c7cc53124943, ip_allocation=immediate, mac_address=fa:16:3e:19:af:81, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:33Z, description=, dns_domain=, id=474eb989-d757-4df7-9a0f-19d414dbaf64, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1375377552, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31758, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2890, status=ACTIVE, 
subnets=['436074ec-6e3f-4c51-9fe0-3048ad5d2eb8'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:35Z, vlan_transparent=None, network_id=474eb989-d757-4df7-9a0f-19d414dbaf64, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2900, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:35Z on network 474eb989-d757-4df7-9a0f-19d414dbaf64#033[00m Dec 2 05:12:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:37.070 263406 INFO neutron.agent.dhcp.agent [None req-894e3777-1d00-47d5-8d29-055623b7ca5d - - - - - -] DHCP configuration for ports {'f1a94dfd-bb5d-4d7c-81e2-aafa79be9a4c'} is completed#033[00m Dec 2 05:12:37 localhost podman[330361]: 2025-12-02 10:12:37.184055481 +0000 UTC m=+0.057831492 container kill a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:12:37 localhost dnsmasq[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/addn_hosts - 1 addresses Dec 2 05:12:37 localhost dnsmasq-dhcp[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/host Dec 2 05:12:37 localhost dnsmasq-dhcp[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/opts Dec 2 05:12:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:37.410 263406 INFO neutron.agent.dhcp.agent [None 
req-06253fe5-d958-40ca-9b4a-09511d319b5b - - - - - -] DHCP configuration for ports {'f4a13a37-be3e-4104-8212-c7cc53124943'} is completed#033[00m Dec 2 05:12:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:37.480 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:35Z, description=, device_id=9ad261e0-bab0-4724-94e5-b35ab4156358, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f4a13a37-be3e-4104-8212-c7cc53124943, ip_allocation=immediate, mac_address=fa:16:3e:19:af:81, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:33Z, description=, dns_domain=, id=474eb989-d757-4df7-9a0f-19d414dbaf64, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1375377552, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31758, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2890, status=ACTIVE, subnets=['436074ec-6e3f-4c51-9fe0-3048ad5d2eb8'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:35Z, vlan_transparent=None, network_id=474eb989-d757-4df7-9a0f-19d414dbaf64, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2900, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:35Z on network 474eb989-d757-4df7-9a0f-19d414dbaf64#033[00m Dec 2 05:12:37 localhost podman[330396]: 2025-12-02 10:12:37.673865404 +0000 
UTC m=+0.052502930 container kill a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:12:37 localhost dnsmasq[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/addn_hosts - 1 addresses Dec 2 05:12:37 localhost dnsmasq-dhcp[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/host Dec 2 05:12:37 localhost dnsmasq-dhcp[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/opts Dec 2 05:12:37 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e202 e202: 6 total, 6 up, 6 in Dec 2 05:12:37 localhost nova_compute[281854]: 2025-12-02 10:12:37.844 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:37.959 263406 INFO neutron.agent.dhcp.agent [None req-a03b9791-ca49-4319-9930-7b63218c4c5e - - - - - -] DHCP configuration for ports {'f4a13a37-be3e-4104-8212-c7cc53124943'} is completed#033[00m Dec 2 05:12:37 localhost systemd[1]: tmp-crun.eCzohM.mount: Deactivated successfully. 
Dec 2 05:12:37 localhost podman[330435]: 2025-12-02 10:12:37.982311024 +0000 UTC m=+0.071074995 container kill e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a6d986e-0caf-4eff-b1d3-a10e7add5365, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 2 05:12:37 localhost dnsmasq[329480]: exiting on receipt of SIGTERM
Dec 2 05:12:37 localhost systemd[1]: libpod-e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02.scope: Deactivated successfully.
Dec 2 05:12:38 localhost podman[330448]: 2025-12-02 10:12:38.036048575 +0000 UTC m=+0.042741329 container died e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a6d986e-0caf-4eff-b1d3-a10e7add5365, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 2 05:12:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:38.104 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:12:38 localhost podman[330448]: 2025-12-02 10:12:38.1213634 +0000 UTC m=+0.128056124 container cleanup e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a6d986e-0caf-4eff-b1d3-a10e7add5365, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 05:12:38 localhost systemd[1]: libpod-conmon-e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02.scope: Deactivated successfully.
Dec 2 05:12:38 localhost podman[330455]: 2025-12-02 10:12:38.148283607 +0000 UTC m=+0.143700011 container remove e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a6d986e-0caf-4eff-b1d3-a10e7add5365, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 2 05:12:38 localhost ovn_controller[154505]: 2025-12-02T10:12:38Z|00551|binding|INFO|Releasing lport 8b3a7663-ad72-4099-a4de-0fa85d29cfd8 from this chassis (sb_readonly=0)
Dec 2 05:12:38 localhost nova_compute[281854]: 2025-12-02 10:12:38.161 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:38 localhost kernel: device tap8b3a7663-ad left promiscuous mode
Dec 2 05:12:38 localhost ovn_controller[154505]: 2025-12-02T10:12:38Z|00552|binding|INFO|Setting lport 8b3a7663-ad72-4099-a4de-0fa85d29cfd8 down in Southbound
Dec 2 05:12:38 localhost nova_compute[281854]: 2025-12-02 10:12:38.189 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:38.626 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:ffff::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-9a6d986e-0caf-4eff-b1d3-a10e7add5365', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a6d986e-0caf-4eff-b1d3-a10e7add5365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096ffa0a51b143039159efc232ec547a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fea98301-f19b-4654-8756-4655244bd809, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8b3a7663-ad72-4099-a4de-0fa85d29cfd8) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:12:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:38.628 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 8b3a7663-ad72-4099-a4de-0fa85d29cfd8 in datapath 9a6d986e-0caf-4eff-b1d3-a10e7add5365 unbound from our chassis#033[00m
Dec 2 05:12:38 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:38.628 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:12:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:38.629 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a6d986e-0caf-4eff-b1d3-a10e7add5365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:12:38 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:38.630 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[26123019-b16c-45ab-9db3-25fd2c010d9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:12:38 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 2 05:12:38 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 2 05:12:38 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 2 05:12:38 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Dec 2 05:12:38 localhost systemd[1]: var-lib-containers-storage-overlay-6b61af61b108caaf0c1a1d229e8160b6e6a2c2166342cae9dc858179db74819f-merged.mount: Deactivated successfully.
Dec 2 05:12:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7aa22a26d4884b96d161141fb097bff586446e4178b03bb247630105f691f02-userdata-shm.mount: Deactivated successfully.
Dec 2 05:12:38 localhost systemd[1]: run-netns-qdhcp\x2d9a6d986e\x2d0caf\x2d4eff\x2db1d3\x2da10e7add5365.mount: Deactivated successfully.
Dec 2 05:12:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:39.065 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:12:39 localhost ovn_controller[154505]: 2025-12-02T10:12:39Z|00553|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0)
Dec 2 05:12:39 localhost nova_compute[281854]: 2025-12-02 10:12:39.346 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 05:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 05:12:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:40.415 263406 INFO neutron.agent.linux.ip_lib [None req-7475a2be-70ae-4108-8633-1396fe3aec0a - - - - - -] Device tap910e474d-33 cannot be used as it has no MAC address#033[00m
Dec 2 05:12:40 localhost nova_compute[281854]: 2025-12-02 10:12:40.488 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:40 localhost kernel: device tap910e474d-33 entered promiscuous mode
Dec 2 05:12:40 localhost ovn_controller[154505]: 2025-12-02T10:12:40Z|00554|binding|INFO|Claiming lport 910e474d-3372-448b-96c4-1b31ecbfdabc for this chassis.
Dec 2 05:12:40 localhost ovn_controller[154505]: 2025-12-02T10:12:40Z|00555|binding|INFO|910e474d-3372-448b-96c4-1b31ecbfdabc: Claiming unknown
Dec 2 05:12:40 localhost NetworkManager[5965]: [1764670360.4956] manager: (tap910e474d-33): new Generic device (/org/freedesktop/NetworkManager/Devices/87)
Dec 2 05:12:40 localhost systemd-udevd[330518]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:12:40 localhost nova_compute[281854]: 2025-12-02 10:12:40.495 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:40 localhost podman[330479]: 2025-12-02 10:12:40.501066036 +0000 UTC m=+0.169198270 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 2 05:12:40 localhost podman[330478]: 2025-12-02 10:12:40.501812816 +0000 UTC m=+0.176308039 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 05:12:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:40.509 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-ead683fd-472e-432f-9d41-70d9f0a3ce59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ead683fd-472e-432f-9d41-70d9f0a3ce59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bda8d64-5938-4ad9-938a-8b5e4fa77265, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=910e474d-3372-448b-96c4-1b31ecbfdabc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:12:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:40.510 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 910e474d-3372-448b-96c4-1b31ecbfdabc in datapath ead683fd-472e-432f-9d41-70d9f0a3ce59 bound to our chassis#033[00m
Dec 2 05:12:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:40.511 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ead683fd-472e-432f-9d41-70d9f0a3ce59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:12:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:40.512 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb0f55d-5395-4dfa-ba4c-df20754d823a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:12:40 localhost ovn_controller[154505]: 2025-12-02T10:12:40Z|00556|binding|INFO|Setting lport 910e474d-3372-448b-96c4-1b31ecbfdabc ovn-installed in OVS
Dec 2 05:12:40 localhost ovn_controller[154505]: 2025-12-02T10:12:40Z|00557|binding|INFO|Setting lport 910e474d-3372-448b-96c4-1b31ecbfdabc up in Southbound
Dec 2 05:12:40 localhost nova_compute[281854]: 2025-12-02 10:12:40.514 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:40 localhost journal[230136]: ethtool ioctl error on tap910e474d-33: No such device
Dec 2 05:12:40 localhost journal[230136]: ethtool ioctl error on tap910e474d-33: No such device
Dec 2 05:12:40 localhost podman[330478]: 2025-12-02 10:12:40.534364873 +0000 UTC m=+0.208860126 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 2 05:12:40 localhost nova_compute[281854]: 2025-12-02 10:12:40.534 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:40 localhost journal[230136]: ethtool ioctl error on tap910e474d-33: No such device
Dec 2 05:12:40 localhost journal[230136]: ethtool ioctl error on tap910e474d-33: No such device
Dec 2 05:12:40 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 05:12:40 localhost journal[230136]: ethtool ioctl error on tap910e474d-33: No such device
Dec 2 05:12:40 localhost journal[230136]: ethtool ioctl error on tap910e474d-33: No such device
Dec 2 05:12:40 localhost journal[230136]: ethtool ioctl error on tap910e474d-33: No such device
Dec 2 05:12:40 localhost journal[230136]: ethtool ioctl error on tap910e474d-33: No such device
Dec 2 05:12:40 localhost nova_compute[281854]: 2025-12-02 10:12:40.576 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:40 localhost podman[330479]: 2025-12-02 10:12:40.600444595 +0000 UTC m=+0.268576829 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 2 05:12:40 localhost nova_compute[281854]: 2025-12-02 10:12:40.599 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:40 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 05:12:40 localhost nova_compute[281854]: 2025-12-02 10:12:40.727 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:41 localhost nova_compute[281854]: 2025-12-02 10:12:41.224 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:12:41 localhost podman[330604]:
Dec 2 05:12:41 localhost podman[330604]: 2025-12-02 10:12:41.448047652 +0000 UTC m=+0.085377486 container create 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:12:41 localhost systemd[1]: Started libpod-conmon-8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8.scope.
Dec 2 05:12:41 localhost podman[330604]: 2025-12-02 10:12:41.400595027 +0000 UTC m=+0.037924841 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:12:41 localhost systemd[1]: Started libcrun container.
Dec 2 05:12:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4e3f8bacff50ad7014797f289aea2e0fcd601ff6a98e2bf0ffd59761e472e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:12:41 localhost podman[330604]: 2025-12-02 10:12:41.519386453 +0000 UTC m=+0.156716257 container init 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:12:41 localhost podman[330604]: 2025-12-02 10:12:41.526193074 +0000 UTC m=+0.163522878 container start 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:12:41 localhost dnsmasq[330622]: started, version 2.85 cachesize 150
Dec 2 05:12:41 localhost dnsmasq[330622]: DNS service limited to local subnets
Dec 2 05:12:41 localhost dnsmasq[330622]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:12:41 localhost dnsmasq[330622]: warning: no upstream servers configured
Dec 2 05:12:41 localhost dnsmasq-dhcp[330622]: DHCPv6, static leases only on 2001:db8:2::, lease time 1d
Dec 2 05:12:41 localhost dnsmasq[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/addn_hosts - 0 addresses
Dec 2 05:12:41 localhost dnsmasq-dhcp[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/host
Dec 2 05:12:41 localhost dnsmasq-dhcp[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/opts
Dec 2 05:12:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:41.594 263406 INFO neutron.agent.dhcp.agent [None req-7475a2be-70ae-4108-8633-1396fe3aec0a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:39Z, description=, device_id=9ad261e0-bab0-4724-94e5-b35ab4156358, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2d6953c4-8711-483a-9c59-9af2bc2e2b9f, ip_allocation=immediate, mac_address=fa:16:3e:9a:a2:51, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:37Z, description=, dns_domain=, id=ead683fd-472e-432f-9d41-70d9f0a3ce59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1205334506, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10483, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2925, status=ACTIVE, subnets=['6694b2b3-a432-4b76-871c-299d16c7d158'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:39Z, vlan_transparent=None, network_id=ead683fd-472e-432f-9d41-70d9f0a3ce59, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2934, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:39Z on network ead683fd-472e-432f-9d41-70d9f0a3ce59#033[00m
Dec 2 05:12:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:41.728 263406 INFO neutron.agent.dhcp.agent [None req-13ca9244-6b7b-4418-9791-9a2be7ffcb4c - - - - - -] DHCP configuration for ports {'7a9c7117-9f23-4d6a-bb0e-cfade268b4d0'} is completed#033[00m
Dec 2 05:12:41 localhost dnsmasq[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/addn_hosts - 1 addresses
Dec 2 05:12:41 localhost podman[330641]: 2025-12-02 10:12:41.774278336 +0000 UTC m=+0.055209122 container kill 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:12:41 localhost dnsmasq-dhcp[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/host
Dec 2 05:12:41 localhost dnsmasq-dhcp[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/opts
Dec 2 05:12:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:41.928 263406 INFO neutron.agent.dhcp.agent [None req-7475a2be-70ae-4108-8633-1396fe3aec0a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:39Z, description=, device_id=9ad261e0-bab0-4724-94e5-b35ab4156358, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2d6953c4-8711-483a-9c59-9af2bc2e2b9f, ip_allocation=immediate, mac_address=fa:16:3e:9a:a2:51, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:37Z, description=, dns_domain=, id=ead683fd-472e-432f-9d41-70d9f0a3ce59, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1205334506, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10483, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2925, status=ACTIVE, subnets=['6694b2b3-a432-4b76-871c-299d16c7d158'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:39Z, vlan_transparent=None, network_id=ead683fd-472e-432f-9d41-70d9f0a3ce59, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2934, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:39Z on network ead683fd-472e-432f-9d41-70d9f0a3ce59#033[00m
Dec 2 05:12:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:42.064 263406 INFO neutron.agent.dhcp.agent [None req-d31ab290-2f9f-410d-92b1-1f2a8570896a - - - - - -] DHCP configuration for ports {'2d6953c4-8711-483a-9c59-9af2bc2e2b9f'} is completed#033[00m
Dec 2 05:12:42 localhost podman[330681]: 2025-12-02 10:12:42.121979912 +0000 UTC m=+0.040930502 container kill 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 2 05:12:42 localhost dnsmasq[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/addn_hosts - 1 addresses
Dec 2 05:12:42 localhost dnsmasq-dhcp[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/host
Dec 2 05:12:42 localhost dnsmasq-dhcp[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/opts
Dec 2 05:12:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:42.318 263406 INFO neutron.agent.dhcp.agent [None req-6642ab26-c777-461d-942f-a4c0b36f6652 - - - - - -] DHCP configuration for ports {'2d6953c4-8711-483a-9c59-9af2bc2e2b9f'} is completed#033[00m
Dec 2 05:12:44 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:44.506 2 INFO neutron.agent.securitygroups_rpc [None req-4efd9dba-690c-4981-8a45-b91486e5b5eb 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:12:44 localhost nova_compute[281854]: 2025-12-02 10:12:44.780 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:45 localhost nova_compute[281854]: 2025-12-02 10:12:45.058 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:46 localhost nova_compute[281854]: 2025-12-02 10:12:46.227 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:12:47 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:47.113 2 INFO neutron.agent.securitygroups_rpc [None req-6b684e17-61a9-4f64-b7d6-8863ef06b405 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:12:47 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:47.323 2 INFO neutron.agent.securitygroups_rpc [None req-6b684e17-61a9-4f64-b7d6-8863ef06b405 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:12:47 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:47.809 2 INFO neutron.agent.securitygroups_rpc [None req-2a241596-c202-4fbe-a6b6-7dd1095be821 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:12:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:47.825 263406 INFO neutron.agent.linux.ip_lib [None req-0989cd2c-e333-4b90-849e-00da2af07b86 - - - - - -] Device tap63f924a2-a2 cannot be used as it has no MAC address#033[00m
Dec 2 05:12:47 localhost nova_compute[281854]: 2025-12-02 10:12:47.847 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:47 localhost kernel: device tap63f924a2-a2 entered promiscuous mode
Dec 2 05:12:47 localhost ovn_controller[154505]: 2025-12-02T10:12:47Z|00558|binding|INFO|Claiming lport 63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a for this chassis.
Dec 2 05:12:47 localhost ovn_controller[154505]: 2025-12-02T10:12:47Z|00559|binding|INFO|63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a: Claiming unknown
Dec 2 05:12:47 localhost NetworkManager[5965]: [1764670367.8553] manager: (tap63f924a2-a2): new Generic device (/org/freedesktop/NetworkManager/Devices/88)
Dec 2 05:12:47 localhost nova_compute[281854]: 2025-12-02 10:12:47.854 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:47 localhost systemd-udevd[330711]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:12:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:47.868 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/16', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-06e734ec-67aa-4893-acc9-29e384e3b54b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06e734ec-67aa-4893-acc9-29e384e3b54b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b02ac233ae12415688cf9d451b55b171', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d614af94-5391-4170-b5ce-0f6ef9d77e23, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:12:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:47.870 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a in datapath 06e734ec-67aa-4893-acc9-29e384e3b54b bound to our chassis#033[00m
Dec 2 05:12:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:47.872 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 06e734ec-67aa-4893-acc9-29e384e3b54b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:12:47 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:47.873 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[b6bb848b-9acd-4128-b361-97e32e78a1f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:12:47 localhost journal[230136]: ethtool ioctl error on tap63f924a2-a2: No such device
Dec 2 05:12:47 localhost ovn_controller[154505]: 2025-12-02T10:12:47Z|00560|binding|INFO|Setting lport 63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a ovn-installed in OVS
Dec 2 05:12:47 localhost ovn_controller[154505]: 2025-12-02T10:12:47Z|00561|binding|INFO|Setting lport 63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a up in Southbound
Dec 2 05:12:47 localhost nova_compute[281854]: 2025-12-02 10:12:47.888 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:12:47 localhost journal[230136]: ethtool ioctl error on tap63f924a2-a2: No such device
Dec 2 05:12:47 localhost journal[230136]: ethtool ioctl error on tap63f924a2-a2: No such device
Dec 2 05:12:47 localhost journal[230136]: ethtool ioctl error on tap63f924a2-a2: No 
such device Dec 2 05:12:47 localhost journal[230136]: ethtool ioctl error on tap63f924a2-a2: No such device Dec 2 05:12:47 localhost journal[230136]: ethtool ioctl error on tap63f924a2-a2: No such device Dec 2 05:12:47 localhost journal[230136]: ethtool ioctl error on tap63f924a2-a2: No such device Dec 2 05:12:47 localhost journal[230136]: ethtool ioctl error on tap63f924a2-a2: No such device Dec 2 05:12:47 localhost nova_compute[281854]: 2025-12-02 10:12:47.929 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:47 localhost nova_compute[281854]: 2025-12-02 10:12:47.965 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:48 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:48.419 2 INFO neutron.agent.securitygroups_rpc [None req-2a4c2d24-b746-4ff5-abcb-d93c9952342d 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:48.702 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:48 localhost podman[330782]: Dec 2 05:12:48 localhost podman[330782]: 2025-12-02 10:12:48.878757513 +0000 UTC m=+0.141188413 container create 013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e734ec-67aa-4893-acc9-29e384e3b54b, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:12:48 localhost systemd[1]: Started libpod-conmon-013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208.scope. Dec 2 05:12:48 localhost systemd[1]: Started libcrun container. Dec 2 05:12:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce5d90708afeefe16b42fc574fe8ffc15e78fbc6fadf28b8ac10c5802cf2141/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:48 localhost podman[330782]: 2025-12-02 10:12:48.84448601 +0000 UTC m=+0.106916910 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:48 localhost podman[330782]: 2025-12-02 10:12:48.949507598 +0000 UTC m=+0.211938518 container init 013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e734ec-67aa-4893-acc9-29e384e3b54b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 2 05:12:48 localhost podman[330782]: 2025-12-02 10:12:48.955665162 +0000 UTC m=+0.218096082 container start 013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e734ec-67aa-4893-acc9-29e384e3b54b, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 
05:12:48 localhost dnsmasq[330800]: started, version 2.85 cachesize 150 Dec 2 05:12:48 localhost dnsmasq[330800]: DNS service limited to local subnets Dec 2 05:12:48 localhost dnsmasq[330800]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:48 localhost dnsmasq[330800]: warning: no upstream servers configured Dec 2 05:12:48 localhost dnsmasq-dhcp[330800]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:12:48 localhost dnsmasq[330800]: read /var/lib/neutron/dhcp/06e734ec-67aa-4893-acc9-29e384e3b54b/addn_hosts - 0 addresses Dec 2 05:12:48 localhost dnsmasq-dhcp[330800]: read /var/lib/neutron/dhcp/06e734ec-67aa-4893-acc9-29e384e3b54b/host Dec 2 05:12:48 localhost dnsmasq-dhcp[330800]: read /var/lib/neutron/dhcp/06e734ec-67aa-4893-acc9-29e384e3b54b/opts Dec 2 05:12:49 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:49.125 263406 INFO neutron.agent.dhcp.agent [None req-b0b5b4c3-4d1a-4f18-9dd6-6e947985d07b - - - - - -] DHCP configuration for ports {'8bed618a-c3b3-40e6-9137-108d11128420'} is completed#033[00m Dec 2 05:12:49 localhost ovn_controller[154505]: 2025-12-02T10:12:49Z|00562|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:12:50 localhost nova_compute[281854]: 2025-12-02 10:12:50.021 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:50 localhost systemd[1]: tmp-crun.kIKoLH.mount: Deactivated successfully. 
Dec 2 05:12:50 localhost dnsmasq[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/addn_hosts - 0 addresses Dec 2 05:12:50 localhost dnsmasq-dhcp[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/host Dec 2 05:12:50 localhost dnsmasq-dhcp[330622]: read /var/lib/neutron/dhcp/ead683fd-472e-432f-9d41-70d9f0a3ce59/opts Dec 2 05:12:50 localhost podman[330818]: 2025-12-02 10:12:50.904818735 +0000 UTC m=+0.066906644 container kill 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:12:51 localhost ovn_controller[154505]: 2025-12-02T10:12:51Z|00563|binding|INFO|Releasing lport 910e474d-3372-448b-96c4-1b31ecbfdabc from this chassis (sb_readonly=0) Dec 2 05:12:51 localhost ovn_controller[154505]: 2025-12-02T10:12:51Z|00564|binding|INFO|Setting lport 910e474d-3372-448b-96c4-1b31ecbfdabc down in Southbound Dec 2 05:12:51 localhost kernel: device tap910e474d-33 left promiscuous mode Dec 2 05:12:51 localhost nova_compute[281854]: 2025-12-02 10:12:51.174 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:51.187 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-ead683fd-472e-432f-9d41-70d9f0a3ce59', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ead683fd-472e-432f-9d41-70d9f0a3ce59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bda8d64-5938-4ad9-938a-8b5e4fa77265, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=910e474d-3372-448b-96c4-1b31ecbfdabc) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:51.189 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 910e474d-3372-448b-96c4-1b31ecbfdabc in datapath ead683fd-472e-432f-9d41-70d9f0a3ce59 unbound from our chassis#033[00m Dec 2 05:12:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:51.191 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ead683fd-472e-432f-9d41-70d9f0a3ce59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:51.192 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[67d53b06-57bc-404b-9253-c481fe8e9e7e]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:51 localhost nova_compute[281854]: 2025-12-02 10:12:51.197 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:51 localhost nova_compute[281854]: 2025-12-02 10:12:51.229 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:51 localhost podman[330860]: 2025-12-02 10:12:51.724274173 +0000 UTC m=+0.061705975 container kill 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:51 localhost dnsmasq[330622]: exiting on receipt of SIGTERM Dec 2 05:12:51 localhost systemd[1]: libpod-8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8.scope: Deactivated successfully. 
Dec 2 05:12:51 localhost podman[330873]: 2025-12-02 10:12:51.801839989 +0000 UTC m=+0.062669010 container died 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:12:51 localhost podman[330873]: 2025-12-02 10:12:51.874366492 +0000 UTC m=+0.135195463 container cleanup 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:12:51 localhost systemd[1]: libpod-conmon-8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8.scope: Deactivated successfully. Dec 2 05:12:51 localhost systemd[1]: var-lib-containers-storage-overlay-7e4e3f8bacff50ad7014797f289aea2e0fcd601ff6a98e2bf0ffd59761e472e2-merged.mount: Deactivated successfully. Dec 2 05:12:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:12:51 localhost podman[330879]: 2025-12-02 10:12:51.980305775 +0000 UTC m=+0.225480259 container remove 8675761b09b43ef4adc4321a4ababcdf03f11703e5c242b5e3d9d3d70c359ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ead683fd-472e-432f-9d41-70d9f0a3ce59, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 05:12:52 localhost systemd[1]: run-netns-qdhcp\x2dead683fd\x2d472e\x2d432f\x2d9d41\x2d70d9f0a3ce59.mount: Deactivated successfully. Dec 2 05:12:52 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:52.010 263406 INFO neutron.agent.dhcp.agent [None req-02651b53-7c8d-421f-acf6-e280523ea510 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:52 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:52.115 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:52 localhost ovn_controller[154505]: 2025-12-02T10:12:52Z|00565|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:12:52 localhost nova_compute[281854]: 2025-12-02 10:12:52.363 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:52 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e203 e203: 6 total, 6 up, 6 in Dec 2 05:12:52 localhost podman[330920]: 2025-12-02 10:12:52.971282894 +0000 UTC m=+0.045645747 container kill a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:52 localhost dnsmasq[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/addn_hosts - 0 addresses Dec 2 05:12:52 localhost dnsmasq-dhcp[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/host Dec 2 05:12:52 localhost dnsmasq-dhcp[330343]: read /var/lib/neutron/dhcp/474eb989-d757-4df7-9a0f-19d414dbaf64/opts Dec 2 05:12:53 localhost ovn_controller[154505]: 2025-12-02T10:12:53Z|00566|binding|INFO|Releasing lport c46392fc-2aa4-4d80-b8dd-cf511eace16e from this chassis (sb_readonly=0) Dec 2 05:12:53 localhost kernel: device tapc46392fc-2a left promiscuous mode Dec 2 05:12:53 localhost ovn_controller[154505]: 2025-12-02T10:12:53Z|00567|binding|INFO|Setting lport c46392fc-2aa4-4d80-b8dd-cf511eace16e down in Southbound Dec 2 05:12:53 localhost nova_compute[281854]: 2025-12-02 10:12:53.150 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:53.168 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 
'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-474eb989-d757-4df7-9a0f-19d414dbaf64', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-474eb989-d757-4df7-9a0f-19d414dbaf64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8265533d-51c3-4865-8bdc-d09b3aea005a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=c46392fc-2aa4-4d80-b8dd-cf511eace16e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:53.170 160221 INFO neutron.agent.ovn.metadata.agent [-] Port c46392fc-2aa4-4d80-b8dd-cf511eace16e in datapath 474eb989-d757-4df7-9a0f-19d414dbaf64 unbound from our chassis#033[00m Dec 2 05:12:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:53.172 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 474eb989-d757-4df7-9a0f-19d414dbaf64 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:53.173 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[b0cc3405-eb6b-4873-8a94-2ccce94a211d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:53 localhost nova_compute[281854]: 2025-12-02 10:12:53.179 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:12:53 localhost podman[330941]: 2025-12-02 10:12:53.458711494 +0000 UTC m=+0.092288361 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute) Dec 2 05:12:53 localhost podman[330941]: 2025-12-02 10:12:53.472181963 +0000 UTC m=+0.105758830 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:12:53 localhost systemd[1]: 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:12:53 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e204 e204: 6 total, 6 up, 6 in Dec 2 05:12:53 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:53.791 2 INFO neutron.agent.securitygroups_rpc [None req-099086ea-fe05-4147-b538-a150ea38436a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:54 localhost podman[330976]: 2025-12-02 10:12:54.194523682 +0000 UTC m=+0.058537531 container kill a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 05:12:54 localhost dnsmasq[330343]: exiting on receipt of SIGTERM Dec 2 05:12:54 localhost systemd[1]: libpod-a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8.scope: Deactivated successfully. 
Dec 2 05:12:54 localhost podman[330989]: 2025-12-02 10:12:54.259771221 +0000 UTC m=+0.052001257 container died a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:54 localhost podman[330989]: 2025-12-02 10:12:54.344289063 +0000 UTC m=+0.136519069 container cleanup a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:54 localhost systemd[1]: libpod-conmon-a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8.scope: Deactivated successfully. 
Dec 2 05:12:54 localhost podman[330991]: 2025-12-02 10:12:54.371763986 +0000 UTC m=+0.155179287 container remove a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-474eb989-d757-4df7-9a0f-19d414dbaf64, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:54.572 263406 INFO neutron.agent.dhcp.agent [None req-50e02f86-2447-46a4-8ec6-09ae40214694 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e205 e205: 6 total, 6 up, 6 in Dec 2 05:12:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:54.733 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:55 localhost neutron_sriov_agent[256494]: 2025-12-02 10:12:55.086 2 INFO neutron.agent.securitygroups_rpc [None req-51b03213-9479-4e3e-bbd0-ea81e004b8c6 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:12:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:55.143 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:55 localhost systemd[1]: var-lib-containers-storage-overlay-fc0ff32bb3b0c7e43de93978e58a81d8da0d84d48e152a3ded188d7eca22b1fd-merged.mount: Deactivated successfully. 
Dec 2 05:12:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a36aace63e53e676cd7466ec343081260f54631e9ace4245db330b9311db77c8-userdata-shm.mount: Deactivated successfully. Dec 2 05:12:55 localhost systemd[1]: run-netns-qdhcp\x2d474eb989\x2dd757\x2d4df7\x2d9a0f\x2d19d414dbaf64.mount: Deactivated successfully. Dec 2 05:12:55 localhost systemd[1]: tmp-crun.SR6N5z.mount: Deactivated successfully. Dec 2 05:12:55 localhost dnsmasq[330800]: exiting on receipt of SIGTERM Dec 2 05:12:55 localhost podman[331036]: 2025-12-02 10:12:55.258996739 +0000 UTC m=+0.078226445 container kill 013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e734ec-67aa-4893-acc9-29e384e3b54b, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 2 05:12:55 localhost systemd[1]: libpod-013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208.scope: Deactivated successfully. 
Dec 2 05:12:55 localhost podman[331050]: 2025-12-02 10:12:55.335323214 +0000 UTC m=+0.066008941 container died 013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e734ec-67aa-4893-acc9-29e384e3b54b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:55 localhost podman[331050]: 2025-12-02 10:12:55.367037189 +0000 UTC m=+0.097722836 container cleanup 013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e734ec-67aa-4893-acc9-29e384e3b54b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:12:55 localhost systemd[1]: libpod-conmon-013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208.scope: Deactivated successfully. 
Dec 2 05:12:55 localhost podman[331052]: 2025-12-02 10:12:55.422373103 +0000 UTC m=+0.142769355 container remove 013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-06e734ec-67aa-4893-acc9-29e384e3b54b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:12:55 localhost nova_compute[281854]: 2025-12-02 10:12:55.464 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:55 localhost kernel: device tap63f924a2-a2 left promiscuous mode Dec 2 05:12:55 localhost ovn_controller[154505]: 2025-12-02T10:12:55Z|00568|binding|INFO|Releasing lport 63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a from this chassis (sb_readonly=0) Dec 2 05:12:55 localhost ovn_controller[154505]: 2025-12-02T10:12:55Z|00569|binding|INFO|Setting lport 63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a down in Southbound Dec 2 05:12:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:55.476 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/16', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-06e734ec-67aa-4893-acc9-29e384e3b54b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-06e734ec-67aa-4893-acc9-29e384e3b54b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b02ac233ae12415688cf9d451b55b171', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d614af94-5391-4170-b5ce-0f6ef9d77e23, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:55.478 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 63f924a2-a2a4-4cfe-a211-cbc03e5f1e6a in datapath 06e734ec-67aa-4893-acc9-29e384e3b54b unbound from our chassis#033[00m Dec 2 05:12:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:55.481 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 06e734ec-67aa-4893-acc9-29e384e3b54b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:12:55 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:55.482 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb4a782-1793-4f8b-87a4-e3e25e1a5220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:55 localhost nova_compute[281854]: 2025-12-02 10:12:55.486 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:55 localhost ovn_controller[154505]: 2025-12-02T10:12:55Z|00570|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:12:55 
localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:55.619 263406 INFO neutron.agent.dhcp.agent [None req-43365a34-7e51-47b7-98ba-33708ce982ca - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:55.621 263406 INFO neutron.agent.dhcp.agent [None req-43365a34-7e51-47b7-98ba-33708ce982ca - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:55 localhost nova_compute[281854]: 2025-12-02 10:12:55.628 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:55 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e206 e206: 6 total, 6 up, 6 in Dec 2 05:12:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:55.928 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:12:56 localhost systemd[1]: var-lib-containers-storage-overlay-1ce5d90708afeefe16b42fc574fe8ffc15e78fbc6fadf28b8ac10c5802cf2141-merged.mount: Deactivated successfully. Dec 2 05:12:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-013abab66e6cef5fba071da9ad9a421e1736c8950ee0cc8858d0e59d9c6bd208-userdata-shm.mount: Deactivated successfully. Dec 2 05:12:56 localhost systemd[1]: run-netns-qdhcp\x2d06e734ec\x2d67aa\x2d4893\x2dacc9\x2d29e384e3b54b.mount: Deactivated successfully. 
Dec 2 05:12:56 localhost nova_compute[281854]: 2025-12-02 10:12:56.232 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:12:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e207 e207: 6 total, 6 up, 6 in Dec 2 05:12:56 localhost nova_compute[281854]: 2025-12-02 10:12:56.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:12:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:12:57 localhost podman[331077]: 2025-12-02 10:12:57.436357634 +0000 UTC m=+0.079075679 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:12:57 localhost podman[331077]: 2025-12-02 10:12:57.467162535 +0000 UTC m=+0.109880580 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:12:57 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:12:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e208 e208: 6 total, 6 up, 6 in Dec 2 05:12:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:12:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:12:58 localhost podman[331094]: 2025-12-02 10:12:58.442523958 +0000 UTC m=+0.084402180 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=) Dec 2 05:12:58 localhost podman[331094]: 2025-12-02 10:12:58.484022874 +0000 UTC m=+0.125901036 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, 
vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, managed_by=edpm_ansible) Dec 2 05:12:58 localhost podman[331095]: 2025-12-02 10:12:58.496843105 +0000 UTC m=+0.131822114 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:12:58 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:12:58 localhost podman[331095]: 2025-12-02 10:12:58.510223572 +0000 UTC m=+0.145202601 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:12:58 localhost 
systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:12:58 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e209 e209: 6 total, 6 up, 6 in Dec 2 05:12:58 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:12:58.837 263406 INFO neutron.agent.linux.ip_lib [None req-789dd8d6-e996-4ca0-8760-61ecbd03a21d - - - - - -] Device tap0eafd6e9-4e cannot be used as it has no MAC address#033[00m Dec 2 05:12:58 localhost nova_compute[281854]: 2025-12-02 10:12:58.888 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:58 localhost kernel: device tap0eafd6e9-4e entered promiscuous mode Dec 2 05:12:58 localhost NetworkManager[5965]: [1764670378.8969] manager: (tap0eafd6e9-4e): new Generic device (/org/freedesktop/NetworkManager/Devices/89) Dec 2 05:12:58 localhost systemd-udevd[331145]: Network interface NamePolicy= disabled on kernel command line. Dec 2 05:12:58 localhost nova_compute[281854]: 2025-12-02 10:12:58.903 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:58 localhost ovn_controller[154505]: 2025-12-02T10:12:58Z|00571|binding|INFO|Claiming lport 0eafd6e9-4ed5-4679-82a5-3fa515f3ef91 for this chassis. 
Dec 2 05:12:58 localhost ovn_controller[154505]: 2025-12-02T10:12:58Z|00572|binding|INFO|0eafd6e9-4ed5-4679-82a5-3fa515f3ef91: Claiming unknown Dec 2 05:12:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:58.917 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-c0f42f6a-43c1-485f-b58f-6f747f110eb9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0f42f6a-43c1-485f-b58f-6f747f110eb9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14bea8ce-4017-4cce-9625-2e1c4e2524ca, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0eafd6e9-4ed5-4679-82a5-3fa515f3ef91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:12:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:58.919 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0eafd6e9-4ed5-4679-82a5-3fa515f3ef91 in datapath c0f42f6a-43c1-485f-b58f-6f747f110eb9 bound to our chassis#033[00m Dec 2 05:12:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:58.921 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 
c0f42f6a-43c1-485f-b58f-6f747f110eb9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 2 05:12:58 localhost ovn_metadata_agent[160216]: 2025-12-02 10:12:58.921 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[82561596-d9bd-448f-9d5f-16bc1816ebbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:12:58 localhost journal[230136]: ethtool ioctl error on tap0eafd6e9-4e: No such device Dec 2 05:12:58 localhost journal[230136]: ethtool ioctl error on tap0eafd6e9-4e: No such device Dec 2 05:12:58 localhost journal[230136]: ethtool ioctl error on tap0eafd6e9-4e: No such device Dec 2 05:12:58 localhost ovn_controller[154505]: 2025-12-02T10:12:58Z|00573|binding|INFO|Setting lport 0eafd6e9-4ed5-4679-82a5-3fa515f3ef91 ovn-installed in OVS Dec 2 05:12:58 localhost ovn_controller[154505]: 2025-12-02T10:12:58Z|00574|binding|INFO|Setting lport 0eafd6e9-4ed5-4679-82a5-3fa515f3ef91 up in Southbound Dec 2 05:12:58 localhost nova_compute[281854]: 2025-12-02 10:12:58.941 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:58 localhost journal[230136]: ethtool ioctl error on tap0eafd6e9-4e: No such device Dec 2 05:12:58 localhost journal[230136]: ethtool ioctl error on tap0eafd6e9-4e: No such device Dec 2 05:12:58 localhost journal[230136]: ethtool ioctl error on tap0eafd6e9-4e: No such device Dec 2 05:12:58 localhost journal[230136]: ethtool ioctl error on tap0eafd6e9-4e: No such device Dec 2 05:12:58 localhost journal[230136]: ethtool ioctl error on tap0eafd6e9-4e: No such device Dec 2 05:12:58 localhost nova_compute[281854]: 2025-12-02 10:12:58.973 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 
05:12:59 localhost nova_compute[281854]: 2025-12-02 10:12:59.010 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:12:59 localhost podman[331216]: Dec 2 05:12:59 localhost podman[331216]: 2025-12-02 10:12:59.899253908 +0000 UTC m=+0.099556444 container create 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:12:59 localhost systemd[1]: Started libpod-conmon-0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3.scope. Dec 2 05:12:59 localhost podman[331216]: 2025-12-02 10:12:59.848558417 +0000 UTC m=+0.048861013 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:12:59 localhost systemd[1]: tmp-crun.VYBSaP.mount: Deactivated successfully. Dec 2 05:12:59 localhost systemd[1]: Started libcrun container. 
Dec 2 05:12:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/911e3bfa49476741920b436c07a5e2f1b04d6d325f1750f3ae03d6c86d880d03/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:12:59 localhost podman[331216]: 2025-12-02 10:12:59.976169158 +0000 UTC m=+0.176471694 container init 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:12:59 localhost podman[331216]: 2025-12-02 10:12:59.987355296 +0000 UTC m=+0.187657832 container start 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:12:59 localhost dnsmasq[331234]: started, version 2.85 cachesize 150 Dec 2 05:12:59 localhost dnsmasq[331234]: DNS service limited to local subnets Dec 2 05:12:59 localhost dnsmasq[331234]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:12:59 localhost dnsmasq[331234]: warning: no upstream servers configured Dec 
2 05:12:59 localhost dnsmasq-dhcp[331234]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:12:59 localhost dnsmasq[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/addn_hosts - 0 addresses Dec 2 05:12:59 localhost dnsmasq-dhcp[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/host Dec 2 05:12:59 localhost dnsmasq-dhcp[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/opts Dec 2 05:13:00 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:00.177 263406 INFO neutron.agent.dhcp.agent [None req-65c51da1-bb7a-4031-858c-2702d92c92b4 - - - - - -] DHCP configuration for ports {'aa61ce72-44d2-4708-b516-008529f36e5b'} is completed#033[00m Dec 2 05:13:00 localhost nova_compute[281854]: 2025-12-02 10:13:00.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:00 localhost nova_compute[281854]: 2025-12-02 10:13:00.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:13:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e210 e210: 6 total, 6 up, 6 in Dec 2 05:13:01 localhost nova_compute[281854]: 2025-12-02 10:13:01.235 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e211 e211: 6 total, 6 up, 6 in Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.667165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381667231, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 748, "num_deletes": 261, "total_data_size": 737478, "memory_usage": 751592, "flush_reason": "Manual Compaction"} Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381672811, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 482171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 
31264, "largest_seqno": 32007, "table_properties": {"data_size": 478562, "index_size": 1400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9028, "raw_average_key_size": 19, "raw_value_size": 470871, "raw_average_value_size": 1037, "num_data_blocks": 60, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670356, "oldest_key_time": 1764670356, "file_creation_time": 1764670381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 5683 microseconds, and 2270 cpu microseconds. Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.672854) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 482171 bytes OK Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.672874) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.676025) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.676048) EVENT_LOG_v1 {"time_micros": 1764670381676042, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.676070) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 733301, prev total WAL file size 733301, number of live WAL files 2. Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.676815) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323730' seq:72057594037927935, type:22 .. 
'6C6F676D0034353234' seq:0, type:0; will stop at (end) Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(470KB)], [51(17MB)] Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381676880, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 18971993, "oldest_snapshot_seqno": -1} Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 13531 keys, 18454636 bytes, temperature: kUnknown Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381784239, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 18454636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18376453, "index_size": 43266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33861, "raw_key_size": 363209, "raw_average_key_size": 26, "raw_value_size": 18145232, "raw_average_value_size": 1341, "num_data_blocks": 1625, "num_entries": 13531, "num_filter_entries": 13531, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.784554) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 18454636 bytes Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.786332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.6 rd, 171.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 17.6 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(77.6) write-amplify(38.3) OK, records in: 14073, records dropped: 542 output_compression: NoCompression Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.786365) EVENT_LOG_v1 {"time_micros": 1764670381786351, "job": 30, "event": "compaction_finished", "compaction_time_micros": 107458, "compaction_time_cpu_micros": 51577, "output_level": 6, "num_output_files": 1, "total_output_size": 18454636, "num_input_records": 14073, "num_output_records": 13531, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005541913/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381786787, "job": 30, "event": "table_file_deletion", "file_number": 53} Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381789461, "job": 30, "event": "table_file_deletion", "file_number": 51} Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.676599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.789586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.789594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.789598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.789602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:13:01.789606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:13:01 localhost ovn_controller[154505]: 2025-12-02T10:13:01Z|00575|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:13:01 localhost 
nova_compute[281854]: 2025-12-02 10:13:01.883 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:01 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:01.925 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:01Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9d3551a6-415a-4192-871f-74c3fa9d8968, ip_allocation=immediate, mac_address=fa:16:3e:69:65:d2, name=tempest-PortsTestJSON-1035181662, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:57Z, description=, dns_domain=, id=c0f42f6a-43c1-485f-b58f-6f747f110eb9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-474863926, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14478, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3002, status=ACTIVE, subnets=['5fceb30f-da49-41a2-b0c9-d0d13330eb82'], tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:12:57Z, vlan_transparent=None, network_id=c0f42f6a-43c1-485f-b58f-6f747f110eb9, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3023, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:01Z on network c0f42f6a-43c1-485f-b58f-6f747f110eb9#033[00m Dec 2 05:13:02 localhost systemd[1]: tmp-crun.QfcvDU.mount: 
Deactivated successfully. Dec 2 05:13:02 localhost dnsmasq[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/addn_hosts - 1 addresses Dec 2 05:13:02 localhost dnsmasq-dhcp[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/host Dec 2 05:13:02 localhost dnsmasq-dhcp[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/opts Dec 2 05:13:02 localhost podman[331252]: 2025-12-02 10:13:02.14218027 +0000 UTC m=+0.056965600 container kill 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:13:02 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:02.342 263406 INFO neutron.agent.dhcp.agent [None req-ab85030f-0ef5-44bf-ac9a-e8f5e365dcf7 - - - - - -] DHCP configuration for ports {'9d3551a6-415a-4192-871f-74c3fa9d8968'} is completed#033[00m Dec 2 05:13:02 localhost nova_compute[281854]: 2025-12-02 10:13:02.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:02 localhost nova_compute[281854]: 2025-12-02 10:13:02.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:13:02 localhost nova_compute[281854]: 2025-12-02 
10:13:02.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:13:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e212 e212: 6 total, 6 up, 6 in Dec 2 05:13:02 localhost nova_compute[281854]: 2025-12-02 10:13:02.978 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:13:02 localhost nova_compute[281854]: 2025-12-02 10:13:02.979 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:13:02 localhost nova_compute[281854]: 2025-12-02 10:13:02.979 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:13:02 localhost nova_compute[281854]: 2025-12-02 10:13:02.980 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:13:02 localhost systemd[1]: tmp-crun.KTOEGJ.mount: Deactivated successfully. 
Dec 2 05:13:02 localhost dnsmasq[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/addn_hosts - 0 addresses Dec 2 05:13:02 localhost podman[331291]: 2025-12-02 10:13:02.988092372 +0000 UTC m=+0.068292410 container kill 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:13:02 localhost dnsmasq-dhcp[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/host Dec 2 05:13:02 localhost dnsmasq-dhcp[331234]: read /var/lib/neutron/dhcp/c0f42f6a-43c1-485f-b58f-6f747f110eb9/opts Dec 2 05:13:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:03.056 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:13:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:03.057 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:13:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:03.058 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:13:03 localhost dnsmasq[331234]: exiting on receipt of SIGTERM Dec 2 05:13:03 localhost podman[331329]: 2025-12-02 10:13:03.544005677 +0000 UTC m=+0.058032978 container kill 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 05:13:03 localhost systemd[1]: libpod-0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3.scope: Deactivated successfully. Dec 2 05:13:03 localhost ovn_controller[154505]: 2025-12-02T10:13:03Z|00576|binding|INFO|Removing iface tap0eafd6e9-4e ovn-installed in OVS Dec 2 05:13:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:03.549 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 8506d660-1e86-469c-a0f8-c15c752515df with type ""#033[00m Dec 2 05:13:03 localhost ovn_controller[154505]: 2025-12-02T10:13:03Z|00577|binding|INFO|Removing lport 0eafd6e9-4ed5-4679-82a5-3fa515f3ef91 ovn-installed in OVS Dec 2 05:13:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:03.551 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': 
'10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-c0f42f6a-43c1-485f-b58f-6f747f110eb9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0f42f6a-43c1-485f-b58f-6f747f110eb9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14bea8ce-4017-4cce-9625-2e1c4e2524ca, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0eafd6e9-4ed5-4679-82a5-3fa515f3ef91) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.550 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:03.553 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 0eafd6e9-4ed5-4679-82a5-3fa515f3ef91 in datapath c0f42f6a-43c1-485f-b58f-6f747f110eb9 unbound from our chassis#033[00m Dec 2 05:13:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:03.556 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0f42f6a-43c1-485f-b58f-6f747f110eb9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.558 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:03 localhost ovn_metadata_agent[160216]: 
2025-12-02 10:13:03.557 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8ba699-dcbd-4247-9946-016ab632b3dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:03 localhost podman[331341]: 2025-12-02 10:13:03.619470238 +0000 UTC m=+0.061431218 container died 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:13:03 localhost systemd[1]: tmp-crun.c1vorU.mount: Deactivated successfully. Dec 2 05:13:03 localhost podman[331341]: 2025-12-02 10:13:03.712479537 +0000 UTC m=+0.154440467 container cleanup 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:13:03 localhost systemd[1]: libpod-conmon-0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3.scope: Deactivated successfully. 
Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.734 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:13:03 localhost podman[331348]: 2025-12-02 10:13:03.735931032 +0000 UTC m=+0.159077550 container remove 0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c0f42f6a-43c1-485f-b58f-6f747f110eb9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:13:03 localhost kernel: device tap0eafd6e9-4e left promiscuous mode Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.749 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.750 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.750 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.752 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.768 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.774 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:13:03 
localhost nova_compute[281854]: 2025-12-02 10:13:03.775 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.775 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.776 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:13:03 localhost nova_compute[281854]: 2025-12-02 10:13:03.776 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:13:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:03.795 263406 INFO neutron.agent.dhcp.agent [None req-d0c861b6-4f2f-40e5-a824-2429d60fa29c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:13:03 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:03.798 263406 INFO neutron.agent.dhcp.agent [None req-d0c861b6-4f2f-40e5-a824-2429d60fa29c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:13:04 localhost 
openstack_network_exporter[242845]: ERROR 10:13:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:13:04 localhost openstack_network_exporter[242845]: ERROR 10:13:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:13:04 localhost openstack_network_exporter[242845]: ERROR 10:13:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:13:04 localhost openstack_network_exporter[242845]: ERROR 10:13:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:13:04 localhost openstack_network_exporter[242845]: Dec 2 05:13:04 localhost openstack_network_exporter[242845]: ERROR 10:13:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:13:04 localhost openstack_network_exporter[242845]: Dec 2 05:13:04 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:13:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1647211614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.346 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:13:04 localhost ovn_controller[154505]: 2025-12-02T10:13:04Z|00578|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.405 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.405 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.413 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:04 localhost systemd[1]: var-lib-containers-storage-overlay-911e3bfa49476741920b436c07a5e2f1b04d6d325f1750f3ae03d6c86d880d03-merged.mount: Deactivated successfully. Dec 2 05:13:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b7cb64389e578053d4e84fe477f9693ceb84b709a9d41d8a7372798b604b5a3-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:13:04 localhost systemd[1]: run-netns-qdhcp\x2dc0f42f6a\x2d43c1\x2d485f\x2db58f\x2d6f747f110eb9.mount: Deactivated successfully.
Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.586 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.587 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11156MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.587 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.587 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 2 05:13:04 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:04.690 2 INFO neutron.agent.securitygroups_rpc [None req-87ab1b98-cb82-4479-9190-360042c3aeed 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.803 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.804 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.804 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 2 05:13:04 localhost nova_compute[281854]: 2025-12-02 10:13:04.937 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 2 05:13:04 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:04.986 2 INFO neutron.agent.securitygroups_rpc [None req-c2e2f39f-f9d0-4681-8662-02fbec20bbf5 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:05 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:05.002 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e213 e213: 6 total, 6 up, 6 in
Dec 2 05:13:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 2 05:13:05 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3953452699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 2 05:13:05 localhost nova_compute[281854]: 2025-12-02 10:13:05.398 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 2 05:13:05 localhost nova_compute[281854]: 2025-12-02 10:13:05.405 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 2 05:13:05 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:05.407 2 INFO neutron.agent.securitygroups_rpc [None req-2bcc8101-e89e-4e48-9cb1-9d691b1fbb0a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:05 localhost nova_compute[281854]: 2025-12-02 10:13:05.420 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 2 05:13:05 localhost nova_compute[281854]: 2025-12-02 10:13:05.423 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 2 05:13:05 localhost nova_compute[281854]: 2025-12-02 10:13:05.423 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 2 05:13:05 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:05.766 2 INFO neutron.agent.securitygroups_rpc [None req-af45566d-4b46-4c57-afce-d4fa5a30fbdd 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:05 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:05.784 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:06 localhost podman[240799]: time="2025-12-02T10:13:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 2 05:13:06 localhost podman[240799]: @ - - [02/Dec/2025:10:13:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1"
Dec 2 05:13:06 localhost podman[240799]: @ - - [02/Dec/2025:10:13:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18780 "" "Go-http-client/1.1"
Dec 2 05:13:06 localhost nova_compute[281854]: 2025-12-02 10:13:06.239 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:06 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:06.261 2 INFO neutron.agent.securitygroups_rpc [None req-4587ba21-504d-4c76-ba2e-bc511139041d 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 05:13:06 localhost podman[331417]: 2025-12-02 10:13:06.437065155 +0000 UTC m=+0.076840239 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 05:13:06 localhost podman[331417]: 2025-12-02 10:13:06.454633823 +0000 UTC m=+0.094408887 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 2 05:13:06 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 05:13:06 localhost ovn_controller[154505]: 2025-12-02T10:13:06Z|00579|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0)
Dec 2 05:13:06 localhost nova_compute[281854]: 2025-12-02 10:13:06.631 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e214 e214: 6 total, 6 up, 6 in
Dec 2 05:13:06 localhost nova_compute[281854]: 2025-12-02 10:13:06.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:13:06 localhost nova_compute[281854]: 2025-12-02 10:13:06.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:13:06 localhost nova_compute[281854]: 2025-12-02 10:13:06.829 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:13:06 localhost nova_compute[281854]: 2025-12-02 10:13:06.829 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:13:06 localhost nova_compute[281854]: 2025-12-02 10:13:06.830 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 2 05:13:07 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:07.212 2 INFO neutron.agent.securitygroups_rpc [None req-5417d582-9df9-4660-b1a1-ab7e1fcb97ff 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:07.237 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:07 localhost nova_compute[281854]: 2025-12-02 10:13:07.871 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:13:07 localhost nova_compute[281854]: 2025-12-02 10:13:07.872 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:13:07 localhost nova_compute[281854]: 2025-12-02 10:13:07.873 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 2 05:13:07 localhost nova_compute[281854]: 2025-12-02 10:13:07.938 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 2 05:13:08 localhost nova_compute[281854]: 2025-12-02 10:13:08.893 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:13:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e215 e215: 6 total, 6 up, 6 in
Dec 2 05:13:09 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:09.284 263406 INFO neutron.agent.linux.ip_lib [None req-a77f70ce-56fc-404e-8c2b-e755c3e8b3be - - - - - -] Device tapdf392c2e-28 cannot be used as it has no MAC address#033[00m
Dec 2 05:13:09 localhost nova_compute[281854]: 2025-12-02 10:13:09.308 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:09 localhost kernel: device tapdf392c2e-28 entered promiscuous mode
Dec 2 05:13:09 localhost NetworkManager[5965]: [1764670389.3157] manager: (tapdf392c2e-28): new Generic device (/org/freedesktop/NetworkManager/Devices/90)
Dec 2 05:13:09 localhost ovn_controller[154505]: 2025-12-02T10:13:09Z|00580|binding|INFO|Claiming lport df392c2e-2825-4eb5-b487-3c55b6dc044b for this chassis.
Dec 2 05:13:09 localhost ovn_controller[154505]: 2025-12-02T10:13:09Z|00581|binding|INFO|df392c2e-2825-4eb5-b487-3c55b6dc044b: Claiming unknown
Dec 2 05:13:09 localhost systemd-udevd[331447]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:13:09 localhost nova_compute[281854]: 2025-12-02 10:13:09.319 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:09 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:09.327 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-e625cddc-8a19-4455-8def-acda09527180', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e625cddc-8a19-4455-8def-acda09527180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39bb33c3-24c9-42a5-b452-f2a8901739e7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=df392c2e-2825-4eb5-b487-3c55b6dc044b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:13:09 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:09.329 160221 INFO neutron.agent.ovn.metadata.agent [-] Port df392c2e-2825-4eb5-b487-3c55b6dc044b in datapath e625cddc-8a19-4455-8def-acda09527180 bound to our chassis#033[00m
Dec 2 05:13:09 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:09.330 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e625cddc-8a19-4455-8def-acda09527180 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:13:09 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:09.331 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[178b507e-ac4b-4382-912e-628ad2a28035]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:13:09 localhost journal[230136]: ethtool ioctl error on tapdf392c2e-28: No such device
Dec 2 05:13:09 localhost journal[230136]: ethtool ioctl error on tapdf392c2e-28: No such device
Dec 2 05:13:09 localhost ovn_controller[154505]: 2025-12-02T10:13:09Z|00582|binding|INFO|Setting lport df392c2e-2825-4eb5-b487-3c55b6dc044b ovn-installed in OVS
Dec 2 05:13:09 localhost ovn_controller[154505]: 2025-12-02T10:13:09Z|00583|binding|INFO|Setting lport df392c2e-2825-4eb5-b487-3c55b6dc044b up in Southbound
Dec 2 05:13:09 localhost nova_compute[281854]: 2025-12-02 10:13:09.353 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:09 localhost journal[230136]: ethtool ioctl error on tapdf392c2e-28: No such device
Dec 2 05:13:09 localhost journal[230136]: ethtool ioctl error on tapdf392c2e-28: No such device
Dec 2 05:13:09 localhost journal[230136]: ethtool ioctl error on tapdf392c2e-28: No such device
Dec 2 05:13:09 localhost journal[230136]: ethtool ioctl error on tapdf392c2e-28: No such device
Dec 2 05:13:09 localhost journal[230136]: ethtool ioctl error on tapdf392c2e-28: No such device
Dec 2 05:13:09 localhost journal[230136]: ethtool ioctl error on tapdf392c2e-28: No such device
Dec 2 05:13:09 localhost nova_compute[281854]: 2025-12-02 10:13:09.390 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:09 localhost nova_compute[281854]: 2025-12-02 10:13:09.422 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:10 localhost podman[331518]:
Dec 2 05:13:10 localhost podman[331518]: 2025-12-02 10:13:10.288222475 +0000 UTC m=+0.075982626 container create 5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 2 05:13:10 localhost systemd[1]: Started libpod-conmon-5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601.scope.
Dec 2 05:13:10 localhost systemd[1]: tmp-crun.AW8p4s.mount: Deactivated successfully.
Dec 2 05:13:10 localhost podman[331518]: 2025-12-02 10:13:10.244389727 +0000 UTC m=+0.032149948 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:13:10 localhost systemd[1]: Started libcrun container.
Dec 2 05:13:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66ae28acad9356d960c9af761227f8a8a17b603705a375c63844eea0d7621df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:13:10 localhost podman[331518]: 2025-12-02 10:13:10.377793102 +0000 UTC m=+0.165553253 container init 5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 2 05:13:10 localhost podman[331518]: 2025-12-02 10:13:10.384960383 +0000 UTC m=+0.172720564 container start 5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 2 05:13:10 localhost dnsmasq[331536]: started, version 2.85 cachesize 150
Dec 2 05:13:10 localhost dnsmasq[331536]: DNS service limited to local subnets
Dec 2 05:13:10 localhost dnsmasq[331536]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:13:10 localhost dnsmasq[331536]: warning: no upstream servers configured
Dec 2 05:13:10 localhost dnsmasq-dhcp[331536]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:13:10 localhost dnsmasq[331536]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/addn_hosts - 0 addresses
Dec 2 05:13:10 localhost dnsmasq-dhcp[331536]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/host
Dec 2 05:13:10 localhost dnsmasq-dhcp[331536]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/opts
Dec 2 05:13:10 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:10.560 263406 INFO neutron.agent.dhcp.agent [None req-0e179484-72bb-43d3-87a5-c1213fc25ddf - - - - - -] DHCP configuration for ports {'f6a41283-fc6f-4680-bb13-d9b38f4d32ad'} is completed#033[00m
Dec 2 05:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 05:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 05:13:11 localhost podman[331548]: 2025-12-02 10:13:11.228745028 +0000 UTC m=+0.108028729 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:13:11 localhost nova_compute[281854]: 2025-12-02 10:13:11.243 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:11 localhost podman[331574]: 2025-12-02 10:13:11.24718167 +0000 UTC m=+0.061518510 container kill 5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 2 05:13:11 localhost dnsmasq[331536]: exiting on receipt of SIGTERM
Dec 2 05:13:11 localhost systemd[1]: libpod-5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601.scope: Deactivated successfully.
Dec 2 05:13:11 localhost podman[331546]: 2025-12-02 10:13:11.206210418 +0000 UTC m=+0.090399420 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 2 05:13:11 localhost podman[331548]: 2025-12-02 10:13:11.267488181 +0000 UTC m=+0.146771842 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 05:13:11 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 05:13:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:11 localhost podman[331546]: 2025-12-02 10:13:11.288974604 +0000 UTC m=+0.173163626 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 05:13:11 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 05:13:11 localhost podman[331614]: 2025-12-02 10:13:11.328060116 +0000 UTC m=+0.060612517 container died 5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:13:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601-userdata-shm.mount: Deactivated successfully. Dec 2 05:13:11 localhost systemd[1]: var-lib-containers-storage-overlay-a66ae28acad9356d960c9af761227f8a8a17b603705a375c63844eea0d7621df-merged.mount: Deactivated successfully. 
Dec 2 05:13:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:11.354 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:47:bc 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-e625cddc-8a19-4455-8def-acda09527180', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e625cddc-8a19-4455-8def-acda09527180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39bb33c3-24c9-42a5-b452-f2a8901739e7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f6a41283-fc6f-4680-bb13-d9b38f4d32ad) old=Port_Binding(mac=['fa:16:3e:b4:47:bc 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-e625cddc-8a19-4455-8def-acda09527180', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e625cddc-8a19-4455-8def-acda09527180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:11.357 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f6a41283-fc6f-4680-bb13-d9b38f4d32ad in datapath e625cddc-8a19-4455-8def-acda09527180 updated#033[00m Dec 2 05:13:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:11.361 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port 72691af4-7c63-4983-b608-e1485f951d79 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:13:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:11.362 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e625cddc-8a19-4455-8def-acda09527180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:11 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:11.363 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[88eac451-4022-4ab2-a8e2-da9488e2dd79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:11 localhost podman[331614]: 2025-12-02 10:13:11.368745829 +0000 UTC m=+0.101298160 container remove 5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:13:11 localhost systemd[1]: 
libpod-conmon-5ec97774e8c59a213cb27abffc1bdf99d569ecadc5b8c88ddeb4534b0f5a9601.scope: Deactivated successfully. Dec 2 05:13:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e216 e216: 6 total, 6 up, 6 in Dec 2 05:13:12 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:12.476 2 INFO neutron.agent.securitygroups_rpc [None req-7de79900-a7b1-40d4-917a-526ff9de5d92 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:12 localhost podman[331691]: Dec 2 05:13:12 localhost podman[331691]: 2025-12-02 10:13:12.737354662 +0000 UTC m=+0.091365746 container create bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:13:12 localhost systemd[1]: Started libpod-conmon-bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a.scope. Dec 2 05:13:12 localhost systemd[1]: Started libcrun container. 
Dec 2 05:13:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f931dc05886cc5f2c688b7d673d13227eb02357be8cf59904bb71f31ec32857/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:13:12 localhost podman[331691]: 2025-12-02 10:13:12.692874786 +0000 UTC m=+0.046885910 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:13:12 localhost podman[331691]: 2025-12-02 10:13:12.803031962 +0000 UTC m=+0.157043036 container init bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:13:12 localhost podman[331691]: 2025-12-02 10:13:12.812302548 +0000 UTC m=+0.166313632 container start bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 2 05:13:12 localhost dnsmasq[331709]: started, version 2.85 cachesize 150 Dec 2 05:13:12 localhost dnsmasq[331709]: DNS service limited to local subnets Dec 2 05:13:12 localhost dnsmasq[331709]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:13:12 localhost dnsmasq[331709]: warning: no upstream servers configured Dec 2 05:13:12 localhost dnsmasq-dhcp[331709]: DHCP, static leases only on 10.100.0.16, lease time 1d Dec 2 05:13:12 localhost dnsmasq-dhcp[331709]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:13:12 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/addn_hosts - 0 addresses Dec 2 05:13:12 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/host Dec 2 05:13:12 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/opts Dec 2 05:13:12 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:12.873 263406 INFO neutron.agent.dhcp.agent [None req-08f2df75-5a63-4b3b-bc5f-1a33698df058 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:12Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=654fad6f-8756-4ac9-a018-496dc04a2b32, ip_allocation=immediate, mac_address=fa:16:3e:2d:d1:84, name=tempest-PortsTestJSON-1309543666, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:13:07Z, description=, dns_domain=, id=e625cddc-8a19-4455-8def-acda09527180, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-235371861, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=26299, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=3047, status=ACTIVE, 
subnets=['3043535f-39cc-4332-a833-e57fad08ebc2', 'f691f97d-c7e9-4344-88fe-0164a1ed278d'], tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:10Z, vlan_transparent=None, network_id=e625cddc-8a19-4455-8def-acda09527180, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3069, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:12Z on network e625cddc-8a19-4455-8def-acda09527180#033[00m Dec 2 05:13:13 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:13.014 2 INFO neutron.agent.securitygroups_rpc [None req-bfd5de53-2a30-4cbd-a16d-14779023f72a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:13 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:13.037 263406 INFO neutron.agent.dhcp.agent [None req-fc15c30d-eb4a-4b9c-8271-29bab936e9ea - - - - - -] DHCP configuration for ports {'f6a41283-fc6f-4680-bb13-d9b38f4d32ad', 'df392c2e-2825-4eb5-b487-3c55b6dc044b'} is completed#033[00m Dec 2 05:13:13 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/addn_hosts - 2 addresses Dec 2 05:13:13 localhost podman[331727]: 2025-12-02 10:13:13.156645305 +0000 UTC m=+0.063575164 container kill bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:13:13 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/host Dec 2 05:13:13 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/opts Dec 2 05:13:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e217 e217: 6 total, 6 up, 6 in Dec 2 05:13:13 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:13.308 263406 INFO neutron.agent.dhcp.agent [None req-382fd06d-1133-4aeb-92b2-e117a8fab261 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:12Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=654fad6f-8756-4ac9-a018-496dc04a2b32, ip_allocation=immediate, mac_address=fa:16:3e:2d:d1:84, name=tempest-PortsTestJSON-1309543666, network_id=e625cddc-8a19-4455-8def-acda09527180, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3069, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:12Z on network e625cddc-8a19-4455-8def-acda09527180#033[00m Dec 2 05:13:13 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:13.406 263406 INFO neutron.agent.dhcp.agent [None req-bc6b1eb8-05a8-4f81-ae5f-9a326e5386e7 - - - - - -] DHCP configuration for ports {'654fad6f-8756-4ac9-a018-496dc04a2b32'} is completed#033[00m Dec 2 05:13:13 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/addn_hosts - 1 addresses Dec 2 05:13:13 localhost podman[331765]: 2025-12-02 10:13:13.57450044 +0000 UTC 
m=+0.056943438 container kill bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:13:13 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/host Dec 2 05:13:13 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/opts Dec 2 05:13:13 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:13.619 2 INFO neutron.agent.securitygroups_rpc [None req-c543e5d9-b160-4e67-9d20-7dd20a3ffb2c 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:13 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:13.835 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:12Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=654fad6f-8756-4ac9-a018-496dc04a2b32, ip_allocation=immediate, mac_address=fa:16:3e:2d:d1:84, name=tempest-PortsTestJSON-1309543666, network_id=e625cddc-8a19-4455-8def-acda09527180, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, 
security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3069, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:13Z on network e625cddc-8a19-4455-8def-acda09527180#033[00m Dec 2 05:13:13 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:13.879 263406 INFO neutron.agent.dhcp.agent [None req-670de3b0-bd62-43f8-a9f6-31e7892e9771 - - - - - -] DHCP configuration for ports {'654fad6f-8756-4ac9-a018-496dc04a2b32'} is completed#033[00m Dec 2 05:13:14 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/addn_hosts - 2 addresses Dec 2 05:13:14 localhost podman[331801]: 2025-12-02 10:13:14.085124448 +0000 UTC m=+0.059413654 container kill bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:13:14 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/host Dec 2 05:13:14 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/opts Dec 2 05:13:14 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:14.261 2 INFO neutron.agent.securitygroups_rpc [None req-654147e9-be5d-4ad5-89df-552977a60280 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:14.348 263406 INFO neutron.agent.dhcp.agent [None 
req-77d491f8-c234-4e8a-9514-b620aa0a1e84 - - - - - -] DHCP configuration for ports {'654fad6f-8756-4ac9-a018-496dc04a2b32'} is completed#033[00m Dec 2 05:13:14 localhost dnsmasq[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/addn_hosts - 0 addresses Dec 2 05:13:14 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/host Dec 2 05:13:14 localhost dnsmasq-dhcp[331709]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/opts Dec 2 05:13:14 localhost podman[331838]: 2025-12-02 10:13:14.552489813 +0000 UTC m=+0.060725539 container kill bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:13:14 localhost dnsmasq[331709]: exiting on receipt of SIGTERM Dec 2 05:13:14 localhost podman[331874]: 2025-12-02 10:13:14.957714322 +0000 UTC m=+0.065884046 container kill bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:13:14 localhost systemd[1]: 
libpod-bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a.scope: Deactivated successfully. Dec 2 05:13:15 localhost podman[331889]: 2025-12-02 10:13:15.034826167 +0000 UTC m=+0.055733956 container died bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 2 05:13:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a-userdata-shm.mount: Deactivated successfully. Dec 2 05:13:15 localhost systemd[1]: var-lib-containers-storage-overlay-0f931dc05886cc5f2c688b7d673d13227eb02357be8cf59904bb71f31ec32857-merged.mount: Deactivated successfully. Dec 2 05:13:15 localhost podman[331889]: 2025-12-02 10:13:15.08147889 +0000 UTC m=+0.102386649 container remove bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:13:15 localhost systemd[1]: libpod-conmon-bf7040d577fb66045f0c0dca66748515c48bbc8ed4eece4b41cff07243affd5a.scope: Deactivated successfully. 
Dec 2 05:13:15 localhost ovn_controller[154505]: 2025-12-02T10:13:15Z|00584|binding|INFO|Removing iface tapdf392c2e-28 ovn-installed in OVS Dec 2 05:13:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:15.133 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 72691af4-7c63-4983-b608-e1485f951d79 with type ""#033[00m Dec 2 05:13:15 localhost ovn_controller[154505]: 2025-12-02T10:13:15Z|00585|binding|INFO|Removing lport df392c2e-2825-4eb5-b487-3c55b6dc044b ovn-installed in OVS Dec 2 05:13:15 localhost nova_compute[281854]: 2025-12-02 10:13:15.135 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:15.136 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-e625cddc-8a19-4455-8def-acda09527180', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e625cddc-8a19-4455-8def-acda09527180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39bb33c3-24c9-42a5-b452-f2a8901739e7, 
chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=df392c2e-2825-4eb5-b487-3c55b6dc044b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:15.139 160221 INFO neutron.agent.ovn.metadata.agent [-] Port df392c2e-2825-4eb5-b487-3c55b6dc044b in datapath e625cddc-8a19-4455-8def-acda09527180 unbound from our chassis#033[00m Dec 2 05:13:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:15.142 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e625cddc-8a19-4455-8def-acda09527180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:15 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:15.143 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[fc155eb4-5680-4e33-9442-4252c9141458]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:15 localhost nova_compute[281854]: 2025-12-02 10:13:15.146 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:15 localhost ovn_controller[154505]: 2025-12-02T10:13:15Z|00586|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:13:15 localhost nova_compute[281854]: 2025-12-02 10:13:15.868 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:15 localhost podman[331965]: Dec 2 05:13:15 localhost podman[331965]: 2025-12-02 10:13:15.92765538 +0000 UTC m=+0.055771257 container create 3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:13:15 localhost systemd[1]: Started libpod-conmon-3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5.scope. Dec 2 05:13:15 localhost systemd[1]: Started libcrun container. Dec 2 05:13:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625ef60836c560783979780add8e80057a2796c370ea3ab250428921cca8ada9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:13:15 localhost podman[331965]: 2025-12-02 10:13:15.988867641 +0000 UTC m=+0.116983518 container init 3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 2 05:13:15 localhost podman[331965]: 2025-12-02 10:13:15.999212617 +0000 UTC m=+0.127328494 container start 3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 05:13:15 localhost podman[331965]: 2025-12-02 10:13:15.899478179 +0000 UTC m=+0.027594036 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:13:16 localhost dnsmasq[331983]: started, version 2.85 cachesize 150
Dec 2 05:13:16 localhost dnsmasq[331983]: DNS service limited to local subnets
Dec 2 05:13:16 localhost dnsmasq[331983]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:13:16 localhost dnsmasq[331983]: warning: no upstream servers configured
Dec 2 05:13:16 localhost dnsmasq-dhcp[331983]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:13:16 localhost dnsmasq[331983]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/addn_hosts - 0 addresses
Dec 2 05:13:16 localhost dnsmasq-dhcp[331983]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/host
Dec 2 05:13:16 localhost dnsmasq-dhcp[331983]: read /var/lib/neutron/dhcp/e625cddc-8a19-4455-8def-acda09527180/opts
Dec 2 05:13:16 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:16.080 263406 INFO neutron.agent.dhcp.agent [None req-dbda9556-274d-4654-bf47-fc38e9c2313b - - - - - -] DHCP configuration for ports {'f6a41283-fc6f-4680-bb13-d9b38f4d32ad', 'df392c2e-2825-4eb5-b487-3c55b6dc044b'} is completed#033[00m
Dec 2 05:13:16 localhost dnsmasq[331983]: exiting on receipt of SIGTERM
Dec 2 05:13:16 localhost podman[332002]: 2025-12-02 10:13:16.234737433 +0000 UTC m=+0.061760186 container kill 3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 2 05:13:16 localhost systemd[1]: libpod-3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5.scope: Deactivated successfully.
Dec 2 05:13:16 localhost nova_compute[281854]: 2025-12-02 10:13:16.246 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:16 localhost podman[332015]: 2025-12-02 10:13:16.305351005 +0000 UTC m=+0.055568042 container died 3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:13:16 localhost podman[332015]: 2025-12-02 10:13:16.389528848 +0000 UTC m=+0.139745835 container cleanup 3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:13:16 localhost systemd[1]: libpod-conmon-3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5.scope: Deactivated successfully.
Dec 2 05:13:16 localhost podman[332018]: 2025-12-02 10:13:16.414951925 +0000 UTC m=+0.153256674 container remove 3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e625cddc-8a19-4455-8def-acda09527180, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 05:13:16 localhost kernel: device tapdf392c2e-28 left promiscuous mode
Dec 2 05:13:16 localhost nova_compute[281854]: 2025-12-02 10:13:16.428 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:16 localhost nova_compute[281854]: 2025-12-02 10:13:16.444 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:16 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:16.462 263406 INFO neutron.agent.dhcp.agent [None req-cfca3a08-db63-400e-83fb-cc90c7e0831c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:16 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:16.463 263406 INFO neutron.agent.dhcp.agent [None req-cfca3a08-db63-400e-83fb-cc90c7e0831c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:16 localhost systemd[1]: var-lib-containers-storage-overlay-625ef60836c560783979780add8e80057a2796c370ea3ab250428921cca8ada9-merged.mount: Deactivated successfully.
Dec 2 05:13:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d49c60f43ea561f511990ffb0cb1a2cb09d10ffc7bc467484479f6f604dfcf5-userdata-shm.mount: Deactivated successfully.
Dec 2 05:13:16 localhost systemd[1]: run-netns-qdhcp\x2de625cddc\x2d8a19\x2d4455\x2d8def\x2dacda09527180.mount: Deactivated successfully.
Dec 2 05:13:17 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:17.666 263406 INFO neutron.agent.linux.ip_lib [None req-cf859642-9de9-4e75-b0d5-72fce650fb9a - - - - - -] Device tapba727844-8d cannot be used as it has no MAC address#033[00m
Dec 2 05:13:17 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e218 e218: 6 total, 6 up, 6 in
Dec 2 05:13:17 localhost nova_compute[281854]: 2025-12-02 10:13:17.698 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:17 localhost kernel: device tapba727844-8d entered promiscuous mode
Dec 2 05:13:17 localhost NetworkManager[5965]: [1764670397.7065] manager: (tapba727844-8d): new Generic device (/org/freedesktop/NetworkManager/Devices/91)
Dec 2 05:13:17 localhost systemd-udevd[332056]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:13:17 localhost ovn_controller[154505]: 2025-12-02T10:13:17Z|00587|binding|INFO|Claiming lport ba727844-8d29-4d76-8d07-d01b3d69d95e for this chassis.
Dec 2 05:13:17 localhost ovn_controller[154505]: 2025-12-02T10:13:17Z|00588|binding|INFO|ba727844-8d29-4d76-8d07-d01b3d69d95e: Claiming unknown
Dec 2 05:13:17 localhost nova_compute[281854]: 2025-12-02 10:13:17.713 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:17.720 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-c967ca81-437f-42d1-b838-c1e08520a79f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c967ca81-437f-42d1-b838-c1e08520a79f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07449851-3999-4148-9dbf-2264021c09b5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ba727844-8d29-4d76-8d07-d01b3d69d95e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:13:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:17.722 160221 INFO neutron.agent.ovn.metadata.agent [-] Port ba727844-8d29-4d76-8d07-d01b3d69d95e in datapath c967ca81-437f-42d1-b838-c1e08520a79f bound to our chassis#033[00m
Dec 2 05:13:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:17.723 160221 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c967ca81-437f-42d1-b838-c1e08520a79f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 2 05:13:17 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:17.728 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5f85aa78-cd2a-4030-947d-0d31a4c3f78b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:13:17 localhost ovn_controller[154505]: 2025-12-02T10:13:17Z|00589|binding|INFO|Setting lport ba727844-8d29-4d76-8d07-d01b3d69d95e ovn-installed in OVS
Dec 2 05:13:17 localhost ovn_controller[154505]: 2025-12-02T10:13:17Z|00590|binding|INFO|Setting lport ba727844-8d29-4d76-8d07-d01b3d69d95e up in Southbound
Dec 2 05:13:17 localhost nova_compute[281854]: 2025-12-02 10:13:17.760 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:17 localhost nova_compute[281854]: 2025-12-02 10:13:17.803 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:17 localhost nova_compute[281854]: 2025-12-02 10:13:17.867 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:18 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e219 e219: 6 total, 6 up, 6 in
Dec 2 05:13:18 localhost podman[332112]:
Dec 2 05:13:18 localhost podman[332112]: 2025-12-02 10:13:18.766394949 +0000 UTC m=+0.103253553 container create c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 2 05:13:18 localhost podman[332112]: 2025-12-02 10:13:18.714267629 +0000 UTC m=+0.051126283 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:13:18 localhost systemd[1]: Started libpod-conmon-c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51.scope.
Dec 2 05:13:18 localhost systemd[1]: tmp-crun.buaD2s.mount: Deactivated successfully.
Dec 2 05:13:18 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:18.830 2 INFO neutron.agent.securitygroups_rpc [None req-809c0752-7390-4fe2-a50b-599cd97feccb 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:18 localhost systemd[1]: Started libcrun container.
Dec 2 05:13:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf9c65a212f9b7deb6b686dc4b60ab931b38012c56a5e34dd2439979a7e36ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:13:18 localhost podman[332112]: 2025-12-02 10:13:18.851284321 +0000 UTC m=+0.188142915 container init c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 2 05:13:18 localhost podman[332112]: 2025-12-02 10:13:18.860410514 +0000 UTC m=+0.197269088 container start c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 2 05:13:18 localhost dnsmasq[332131]: started, version 2.85 cachesize 150
Dec 2 05:13:18 localhost dnsmasq[332131]: DNS service limited to local subnets
Dec 2 05:13:18 localhost dnsmasq[332131]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:13:18 localhost dnsmasq[332131]: warning: no upstream servers configured
Dec 2 05:13:18 localhost dnsmasq-dhcp[332131]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:13:18 localhost dnsmasq[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/addn_hosts - 0 addresses
Dec 2 05:13:18 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/host
Dec 2 05:13:18 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/opts
Dec 2 05:13:18 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:18.919 263406 INFO neutron.agent.dhcp.agent [None req-1958b7f1-9d37-44f8-badf-cb00c8acdb2a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:18Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fd39b39a-7655-4402-8611-0df2d49b294f, ip_allocation=immediate, mac_address=fa:16:3e:ed:f7:2b, name=tempest-PortsTestJSON-735674509, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:13:16Z, description=, dns_domain=, id=c967ca81-437f-42d1-b838-c1e08520a79f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1850680089, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59962, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3070, status=ACTIVE, subnets=['9d0b22dc-23e6-4d68-9fb6-15b61b72d853'], tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:16Z, vlan_transparent=None, network_id=c967ca81-437f-42d1-b838-c1e08520a79f, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3095, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:18Z on network c967ca81-437f-42d1-b838-c1e08520a79f#033[00m
Dec 2 05:13:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:19.117 263406 INFO neutron.agent.dhcp.agent [None req-272d30d7-7c8f-4cc2-a11d-1c4bb34de297 - - - - - -] DHCP configuration for ports {'2269273e-0b74-4448-be38-0cbafd2abcb7'} is completed#033[00m
Dec 2 05:13:19 localhost dnsmasq[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/addn_hosts - 1 addresses
Dec 2 05:13:19 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/host
Dec 2 05:13:19 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/opts
Dec 2 05:13:19 localhost podman[332148]: 2025-12-02 10:13:19.138980138 +0000 UTC m=+0.067267344 container kill c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 2 05:13:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:19.374 263406 INFO neutron.agent.dhcp.agent [None req-9135939d-b8ad-4ffa-b8f7-8fc5da451d46 - - - - - -] DHCP configuration for ports {'fd39b39a-7655-4402-8611-0df2d49b294f'} is completed#033[00m
Dec 2 05:13:19 localhost ovn_controller[154505]: 2025-12-02T10:13:19Z|00591|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0)
Dec 2 05:13:19 localhost nova_compute[281854]: 2025-12-02 10:13:19.569 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:19 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:19.817 2 INFO neutron.agent.securitygroups_rpc [None req-86a7997e-3c35-4f55-af22-9588e0545ddf 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:19 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:19.887 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:19Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=272e8361-349d-4315-9e1b-553eba6bd9cc, ip_allocation=immediate, mac_address=fa:16:3e:f4:76:53, name=tempest-PortsTestJSON-1039621780, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:13:16Z, description=, dns_domain=, id=c967ca81-437f-42d1-b838-c1e08520a79f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1850680089, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59962, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3070, status=ACTIVE, subnets=['9d0b22dc-23e6-4d68-9fb6-15b61b72d853'], tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:16Z, vlan_transparent=None, network_id=c967ca81-437f-42d1-b838-c1e08520a79f, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3101, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:19Z on network c967ca81-437f-42d1-b838-c1e08520a79f#033[00m
Dec 2 05:13:20 localhost dnsmasq[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/addn_hosts - 2 addresses
Dec 2 05:13:20 localhost podman[332187]: 2025-12-02 10:13:20.102233758 +0000 UTC m=+0.062686403 container kill c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 2 05:13:20 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/host
Dec 2 05:13:20 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/opts
Dec 2 05:13:20 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:20.322 263406 INFO neutron.agent.dhcp.agent [None req-2aaa37c5-8f93-44d5-bdf1-7dcd356558b7 - - - - - -] DHCP configuration for ports {'272e8361-349d-4315-9e1b-553eba6bd9cc'} is completed#033[00m
Dec 2 05:13:20 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:20.546 2 INFO neutron.agent.securitygroups_rpc [None req-173c0738-50d3-48cc-af5a-b78421c8e23c 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:20 localhost dnsmasq[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/addn_hosts - 1 addresses
Dec 2 05:13:20 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/host
Dec 2 05:13:20 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/opts
Dec 2 05:13:20 localhost podman[332226]: 2025-12-02 10:13:20.780079481 +0000 UTC m=+0.060037601 container kill c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 2 05:13:20 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:20.979 2 INFO neutron.agent.securitygroups_rpc [None req-a615b87d-8e60-4d08-9004-5c448f1ed91b 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:21 localhost dnsmasq[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/addn_hosts - 0 addresses
Dec 2 05:13:21 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/host
Dec 2 05:13:21 localhost podman[332263]: 2025-12-02 10:13:21.217230031 +0000 UTC m=+0.062261271 container kill c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 2 05:13:21 localhost dnsmasq-dhcp[332131]: read /var/lib/neutron/dhcp/c967ca81-437f-42d1-b838-c1e08520a79f/opts
Dec 2 05:13:21 localhost nova_compute[281854]: 2025-12-02 10:13:21.248 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:21 localhost dnsmasq[332131]: exiting on receipt of SIGTERM
Dec 2 05:13:21 localhost podman[332300]: 2025-12-02 10:13:21.595256525 +0000 UTC m=+0.053322122 container kill c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 2 05:13:21 localhost systemd[1]: libpod-c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51.scope: Deactivated successfully.
Dec 2 05:13:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:21.637 160221 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d78a3c66-6a9f-41e0-bcd0-7bef2817af9a with type ""#033[00m
Dec 2 05:13:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:21.638 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-c967ca81-437f-42d1-b838-c1e08520a79f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c967ca81-437f-42d1-b838-c1e08520a79f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07449851-3999-4148-9dbf-2264021c09b5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ba727844-8d29-4d76-8d07-d01b3d69d95e) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:13:21 localhost ovn_controller[154505]: 2025-12-02T10:13:21Z|00592|binding|INFO|Removing iface tapba727844-8d ovn-installed in OVS
Dec 2 05:13:21 localhost ovn_controller[154505]: 2025-12-02T10:13:21Z|00593|binding|INFO|Removing lport ba727844-8d29-4d76-8d07-d01b3d69d95e ovn-installed in OVS
Dec 2 05:13:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:21.639 160221 INFO neutron.agent.ovn.metadata.agent [-] Port ba727844-8d29-4d76-8d07-d01b3d69d95e in datapath c967ca81-437f-42d1-b838-c1e08520a79f unbound from our chassis#033[00m
Dec 2 05:13:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:21.639 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c967ca81-437f-42d1-b838-c1e08520a79f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:13:21 localhost nova_compute[281854]: 2025-12-02 10:13:21.639 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:21 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:21.640 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[4989e8a7-fead-43cb-a425-2476da464b1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:13:21 localhost nova_compute[281854]: 2025-12-02 10:13:21.646 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:21 localhost podman[332314]: 2025-12-02 10:13:21.662492927 +0000 UTC m=+0.057033251 container died c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 2 05:13:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e220 e220: 6 total, 6 up, 6 in
Dec 2 05:13:21 localhost podman[332314]: 2025-12-02 10:13:21.745710294 +0000 UTC m=+0.140250618 container cleanup c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:13:21 localhost systemd[1]: libpod-conmon-c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51.scope: Deactivated successfully.
Dec 2 05:13:21 localhost podman[332316]: 2025-12-02 10:13:21.773784892 +0000 UTC m=+0.157326243 container remove c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c967ca81-437f-42d1-b838-c1e08520a79f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 2 05:13:21 localhost systemd[1]: var-lib-containers-storage-overlay-dcf9c65a212f9b7deb6b686dc4b60ab931b38012c56a5e34dd2439979a7e36ba-merged.mount: Deactivated successfully.
Dec 2 05:13:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4f75e9268a17a140d407cb663578b693ce2951f17b063fab36a2633787ecf51-userdata-shm.mount: Deactivated successfully.
Dec 2 05:13:21 localhost kernel: device tapba727844-8d left promiscuous mode
Dec 2 05:13:21 localhost nova_compute[281854]: 2025-12-02 10:13:21.838 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:21 localhost nova_compute[281854]: 2025-12-02 10:13:21.858 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:21.873 263406 INFO neutron.agent.dhcp.agent [None req-ee495048-54fe-4af6-8d59-80e9582e2b69 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:21 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:21.873 263406 INFO neutron.agent.dhcp.agent [None req-ee495048-54fe-4af6-8d59-80e9582e2b69 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 2 05:13:21 localhost systemd[1]: run-netns-qdhcp\x2dc967ca81\x2d437f\x2d42d1\x2db838\x2dc1e08520a79f.mount: Deactivated successfully.
Dec 2 05:13:22 localhost ovn_controller[154505]: 2025-12-02T10:13:22Z|00594|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0)
Dec 2 05:13:22 localhost nova_compute[281854]: 2025-12-02 10:13:22.028 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:23 localhost nova_compute[281854]: 2025-12-02 10:13:23.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 2 05:13:24 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:24.100 2 INFO neutron.agent.securitygroups_rpc [None req-1c76d5fd-a4fe-47e2-aa2d-3afed3d7786f 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.
Dec 2 05:13:24 localhost podman[332343]: 2025-12-02 10:13:24.442827561 +0000 UTC m=+0.086226669 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 2 05:13:24 localhost podman[332343]: 2025-12-02 10:13:24.457076001 +0000 UTC m=+0.100475099 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 2 05:13:24 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully.
Dec 2 05:13:26 localhost nova_compute[281854]: 2025-12-02 10:13:26.252 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:13:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:13:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e221 e221: 6 total, 6 up, 6 in
Dec 2 05:13:26 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:26.677 2 INFO neutron.agent.securitygroups_rpc [None req-03a5d6f5-e9fc-4da2-b77f-f6e56b5ab3f7 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m
Dec 2 05:13:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.
Dec 2 05:13:28 localhost podman[332362]: 2025-12-02 10:13:28.437748861 +0000 UTC m=+0.082267843 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 2 05:13:28 localhost podman[332362]: 2025-12-02 10:13:28.471947553 +0000 UTC m=+0.116466565 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 05:13:28 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:13:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:13:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:13:29 localhost podman[332379]: 2025-12-02 10:13:29.432933242 +0000 UTC m=+0.069045200 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:13:29 localhost podman[332379]: 2025-12-02 10:13:29.446011921 +0000 UTC m=+0.082123929 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:13:29 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:13:29 localhost podman[332378]: 2025-12-02 10:13:29.504790537 +0000 UTC m=+0.142330544 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6) Dec 2 05:13:29 localhost podman[332378]: 2025-12-02 10:13:29.521944614 +0000 UTC m=+0.159484591 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.) Dec 2 05:13:29 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:13:29 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:29.824 263406 INFO neutron.agent.linux.ip_lib [None req-c25b3346-c026-494e-8d13-bc2f4aad8a67 - - - - - -] Device tap280223a7-c0 cannot be used as it has no MAC address#033[00m Dec 2 05:13:29 localhost nova_compute[281854]: 2025-12-02 10:13:29.846 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:29 localhost kernel: device tap280223a7-c0 entered promiscuous mode Dec 2 05:13:29 localhost NetworkManager[5965]: [1764670409.8538] manager: (tap280223a7-c0): new Generic device (/org/freedesktop/NetworkManager/Devices/92) Dec 2 05:13:29 localhost nova_compute[281854]: 2025-12-02 10:13:29.854 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:29 localhost ovn_controller[154505]: 2025-12-02T10:13:29Z|00595|binding|INFO|Claiming lport 280223a7-c06f-4632-bc9d-10fcc2daed96 for this chassis. Dec 2 05:13:29 localhost ovn_controller[154505]: 2025-12-02T10:13:29Z|00596|binding|INFO|280223a7-c06f-4632-bc9d-10fcc2daed96: Claiming unknown Dec 2 05:13:29 localhost systemd-udevd[332431]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:13:29 localhost nova_compute[281854]: 2025-12-02 10:13:29.882 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:29 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:29.886 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9b4da99-dc68-46c9-bcf0-a3cfe207d767, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=280223a7-c06f-4632-bc9d-10fcc2daed96) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:29 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:29.887 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 280223a7-c06f-4632-bc9d-10fcc2daed96 in datapath 55499ea7-fec3-45ce-8fdc-4c408cd7abf9 bound to our chassis#033[00m Dec 2 05:13:29 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:29.889 160221 DEBUG 
neutron.agent.ovn.metadata.agent [-] Port f89c0c39-9ee7-4005-bfd3-e3f0da74be0e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:13:29 localhost ovn_controller[154505]: 2025-12-02T10:13:29Z|00597|binding|INFO|Setting lport 280223a7-c06f-4632-bc9d-10fcc2daed96 ovn-installed in OVS Dec 2 05:13:29 localhost ovn_controller[154505]: 2025-12-02T10:13:29Z|00598|binding|INFO|Setting lport 280223a7-c06f-4632-bc9d-10fcc2daed96 up in Southbound Dec 2 05:13:29 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:29.890 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:29 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:29.891 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e0b1a3-e909-4de1-a056-5991a83bfeeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:29 localhost nova_compute[281854]: 2025-12-02 10:13:29.892 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:29 localhost nova_compute[281854]: 2025-12-02 10:13:29.923 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:29 localhost nova_compute[281854]: 2025-12-02 10:13:29.944 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:30 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:30.049 2 INFO neutron.agent.securitygroups_rpc [None req-fd3a2c94-7b8a-4692-92f4-6e53ae5bb9ca 
8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['b06e62c3-67bb-4248-8ca7-8eec12bdd5e1']#033[00m Dec 2 05:13:30 localhost podman[332486]: Dec 2 05:13:30 localhost podman[332486]: 2025-12-02 10:13:30.737733624 +0000 UTC m=+0.075706399 container create b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:13:30 localhost systemd[1]: Started libpod-conmon-b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e.scope. Dec 2 05:13:30 localhost systemd[1]: tmp-crun.0trQqR.mount: Deactivated successfully. Dec 2 05:13:30 localhost podman[332486]: 2025-12-02 10:13:30.6989656 +0000 UTC m=+0.036938425 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:13:30 localhost systemd[1]: Started libcrun container. 
Dec 2 05:13:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a4833b9e177edc4a21650603f08c75cabbd06af814a1afc50e1b9f8ede988e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:13:30 localhost podman[332486]: 2025-12-02 10:13:30.838337185 +0000 UTC m=+0.176309960 container init b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:13:30 localhost podman[332486]: 2025-12-02 10:13:30.847067607 +0000 UTC m=+0.185040352 container start b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:13:30 localhost dnsmasq[332504]: started, version 2.85 cachesize 150 Dec 2 05:13:30 localhost dnsmasq[332504]: DNS service limited to local subnets Dec 2 05:13:30 localhost dnsmasq[332504]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:13:30 localhost dnsmasq[332504]: warning: no upstream servers configured Dec 
2 05:13:30 localhost dnsmasq-dhcp[332504]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:13:30 localhost dnsmasq[332504]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 0 addresses Dec 2 05:13:30 localhost dnsmasq-dhcp[332504]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:30 localhost dnsmasq-dhcp[332504]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:30 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:30.905 263406 INFO neutron.agent.dhcp.agent [None req-4de2bde8-a057-4821-aedb-98767a9144b7 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:29Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7eea047d-a0d7-4840-b24f-b6baca53023b, ip_allocation=immediate, mac_address=fa:16:3e:8e:df:50, name=tempest-PortsTestJSON-989628863, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:43Z, description=, dns_domain=, id=55499ea7-fec3-45ce-8fdc-4c408cd7abf9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1562010144, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17376, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2951, status=ACTIVE, subnets=['bed8ca05-97c3-4b62-a76b-f0a2af362b59'], tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:28Z, vlan_transparent=None, network_id=55499ea7-fec3-45ce-8fdc-4c408cd7abf9, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b06e62c3-67bb-4248-8ca7-8eec12bdd5e1'], standard_attr_id=3165, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:29Z on network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9#033[00m Dec 2 05:13:31 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:31.121 263406 INFO neutron.agent.dhcp.agent [None req-5884a31d-6066-4875-bde8-c00fddef00bb - - - - - -] DHCP configuration for ports {'0bb31a40-2592-41bf-9cb8-279241b602e7', '3d63b6f3-67ae-4c21-b56a-394abd9240e9'} is completed#033[00m Dec 2 05:13:31 localhost dnsmasq[332504]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 1 addresses Dec 2 05:13:31 localhost dnsmasq-dhcp[332504]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:31 localhost podman[332522]: 2025-12-02 10:13:31.141285228 +0000 UTC m=+0.061869870 container kill b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:13:31 localhost dnsmasq-dhcp[332504]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:31 localhost nova_compute[281854]: 2025-12-02 10:13:31.255 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 
343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:31 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:31.393 263406 INFO neutron.agent.dhcp.agent [None req-74dd683f-b432-4e84-a1c3-c4b295f95150 - - - - - -] DHCP configuration for ports {'7eea047d-a0d7-4840-b24f-b6baca53023b'} is completed#033[00m Dec 2 05:13:31 localhost dnsmasq[332504]: exiting on receipt of SIGTERM Dec 2 05:13:31 localhost podman[332560]: 2025-12-02 10:13:31.57513166 +0000 UTC m=+0.056285111 container kill b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 2 05:13:31 localhost systemd[1]: libpod-b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e.scope: Deactivated successfully. 
Dec 2 05:13:31 localhost podman[332573]: 2025-12-02 10:13:31.643878952 +0000 UTC m=+0.053738634 container died b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:13:31 localhost podman[332573]: 2025-12-02 10:13:31.681433642 +0000 UTC m=+0.091293284 container cleanup b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 2 05:13:31 localhost systemd[1]: libpod-conmon-b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e.scope: Deactivated successfully. 
Dec 2 05:13:31 localhost podman[332575]: 2025-12-02 10:13:31.728807705 +0000 UTC m=+0.131372032 container remove b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:13:31 localhost systemd[1]: var-lib-containers-storage-overlay-6a4833b9e177edc4a21650603f08c75cabbd06af814a1afc50e1b9f8ede988e6-merged.mount: Deactivated successfully. Dec 2 05:13:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b46aacbe8c988ebee757b846df8517fff1784c836821a8f42046f67dcbe0790e-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:13:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:32.442 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9b4da99-dc68-46c9-bcf0-a3cfe207d767, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d63b6f3-67ae-4c21-b56a-394abd9240e9) old=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:32.445 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d63b6f3-67ae-4c21-b56a-394abd9240e9 in datapath 55499ea7-fec3-45ce-8fdc-4c408cd7abf9 updated#033[00m Dec 2 05:13:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:32.447 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port f89c0c39-9ee7-4005-bfd3-e3f0da74be0e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:13:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:32.447 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:32 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:32.448 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0a709f-08dd-4a5c-bf5f-29ac9a62f81a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:33 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:33.040 2 INFO neutron.agent.securitygroups_rpc [None req-2f13aa5e-1f09-4188-af19-a84ba5538b10 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['b06e62c3-67bb-4248-8ca7-8eec12bdd5e1', '19b93206-6bbf-441b-abe9-609f462663ba']#033[00m Dec 2 05:13:33 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:33.506 2 INFO neutron.agent.securitygroups_rpc [None req-af664360-5183-46be-8a67-9553906db0ca 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['19b93206-6bbf-441b-abe9-609f462663ba']#033[00m Dec 2 05:13:33 localhost 
podman[332738]: Dec 2 05:13:33 localhost podman[332738]: 2025-12-02 10:13:33.78836676 +0000 UTC m=+0.095921417 container create ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:13:33 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:13:33 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:13:33 localhost systemd[1]: Started libpod-conmon-ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12.scope. Dec 2 05:13:33 localhost podman[332738]: 2025-12-02 10:13:33.740232147 +0000 UTC m=+0.047786854 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:13:33 localhost systemd[1]: Started libcrun container. 
Dec 2 05:13:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9646cfbbdb3d2f70d14a5e7560c806dd20cd6e2eebbd709fdb4223591f816bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:13:33 localhost podman[332738]: 2025-12-02 10:13:33.87729192 +0000 UTC m=+0.184846587 container init ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:13:33 localhost podman[332738]: 2025-12-02 10:13:33.888906809 +0000 UTC m=+0.196461466 container start ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:13:33 localhost dnsmasq[332757]: started, version 2.85 cachesize 150 Dec 2 05:13:33 localhost dnsmasq[332757]: DNS service limited to local subnets Dec 2 05:13:33 localhost dnsmasq[332757]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:13:33 localhost dnsmasq[332757]: warning: no upstream servers configured Dec 
2 05:13:33 localhost dnsmasq-dhcp[332757]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:13:33 localhost dnsmasq-dhcp[332757]: DHCP, static leases only on 10.100.0.16, lease time 1d Dec 2 05:13:33 localhost dnsmasq[332757]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 1 addresses Dec 2 05:13:33 localhost dnsmasq-dhcp[332757]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:33 localhost dnsmasq-dhcp[332757]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:33 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:33.952 263406 INFO neutron.agent.dhcp.agent [None req-8af431b2-be97-49e2-82ee-1b399287cde1 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:29Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7eea047d-a0d7-4840-b24f-b6baca53023b, ip_allocation=immediate, mac_address=fa:16:3e:8e:df:50, name=tempest-PortsTestJSON-109939691, network_id=55499ea7-fec3-45ce-8fdc-4c408cd7abf9, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['19b93206-6bbf-441b-abe9-609f462663ba'], standard_attr_id=3165, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:32Z on network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9#033[00m Dec 2 05:13:33 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:33.956 263406 INFO oslo.privsep.daemon [None req-8af431b2-be97-49e2-82ee-1b399287cde1 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', 
'/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmpyw9umdxh/privsep.sock']#033[00m Dec 2 05:13:34 localhost openstack_network_exporter[242845]: ERROR 10:13:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:13:34 localhost openstack_network_exporter[242845]: ERROR 10:13:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:13:34 localhost openstack_network_exporter[242845]: ERROR 10:13:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:13:34 localhost openstack_network_exporter[242845]: ERROR 10:13:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:13:34 localhost openstack_network_exporter[242845]: Dec 2 05:13:34 localhost openstack_network_exporter[242845]: ERROR 10:13:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:13:34 localhost openstack_network_exporter[242845]: Dec 2 05:13:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:34.248 263406 INFO neutron.agent.dhcp.agent [None req-49cadb5d-eea7-4b04-8444-c047171737f6 - - - - - -] DHCP configuration for ports {'0bb31a40-2592-41bf-9cb8-279241b602e7', '7eea047d-a0d7-4840-b24f-b6baca53023b', '280223a7-c06f-4632-bc9d-10fcc2daed96', '3d63b6f3-67ae-4c21-b56a-394abd9240e9'} is completed#033[00m Dec 2 05:13:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:34.619 263406 INFO oslo.privsep.daemon [None req-8af431b2-be97-49e2-82ee-1b399287cde1 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 2 05:13:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:34.507 332762 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 2 05:13:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:34.522 332762 INFO oslo.privsep.daemon [-] privsep process running with 
uid/gid: 0/0#033[00m Dec 2 05:13:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:34.525 332762 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Dec 2 05:13:34 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:34.526 332762 INFO oslo.privsep.daemon [-] privsep daemon running as pid 332762#033[00m Dec 2 05:13:34 localhost dnsmasq-dhcp[332757]: DHCPRELEASE(tap280223a7-c0) 10.100.0.7 fa:16:3e:8e:df:50 Dec 2 05:13:35 localhost dnsmasq[332757]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 1 addresses Dec 2 05:13:35 localhost dnsmasq-dhcp[332757]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:35 localhost dnsmasq-dhcp[332757]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:35 localhost podman[332782]: 2025-12-02 10:13:35.448750758 +0000 UTC m=+0.048395300 container kill ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:13:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:35.534 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 
'0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:35.535 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:13:35 localhost nova_compute[281854]: 2025-12-02 10:13:35.566 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:35 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:35.688 263406 INFO neutron.agent.dhcp.agent [None req-df178d8d-7293-4451-af7b-6c191eee803c - - - - - -] DHCP configuration for ports {'7eea047d-a0d7-4840-b24f-b6baca53023b'} is completed#033[00m Dec 2 05:13:35 localhost dnsmasq[332757]: exiting on receipt of SIGTERM Dec 2 05:13:35 localhost podman[332820]: 2025-12-02 10:13:35.906712432 +0000 UTC m=+0.064253124 container kill ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:13:35 localhost systemd[1]: libpod-ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12.scope: Deactivated successfully. 
Dec 2 05:13:35 localhost podman[332834]: 2025-12-02 10:13:35.973744398 +0000 UTC m=+0.052059078 container died ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:13:36 localhost podman[332834]: 2025-12-02 10:13:36.003376308 +0000 UTC m=+0.081690958 container cleanup ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:13:36 localhost systemd[1]: libpod-conmon-ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12.scope: Deactivated successfully. 
Dec 2 05:13:36 localhost podman[240799]: time="2025-12-02T10:13:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:13:36 localhost podman[332836]: 2025-12-02 10:13:36.095263457 +0000 UTC m=+0.168222774 container remove ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 2 05:13:36 localhost podman[240799]: @ - - [02/Dec/2025:10:13:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156205 "" "Go-http-client/1.1" Dec 2 05:13:36 localhost podman[240799]: @ - - [02/Dec/2025:10:13:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18780 "" "Go-http-client/1.1" Dec 2 05:13:36 localhost nova_compute[281854]: 2025-12-02 10:13:36.258 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:36 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:36.627 2 INFO neutron.agent.securitygroups_rpc [None req-3559f9f7-1434-4371-a7c8-42d18644b0ee 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['5ce035be-6b85-468c-9f45-e514c3373f72']#033[00m Dec 2 05:13:36 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:13:36 localhost systemd[1]: var-lib-containers-storage-overlay-e9646cfbbdb3d2f70d14a5e7560c806dd20cd6e2eebbd709fdb4223591f816bf-merged.mount: Deactivated successfully. Dec 2 05:13:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea8089f72d080e873f522e605c144f5550ca1906c2b4bff2df7ae548ba9bbd12-userdata-shm.mount: Deactivated successfully. Dec 2 05:13:36 localhost podman[332876]: 2025-12-02 10:13:36.928741808 +0000 UTC m=+0.073501679 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:13:36 localhost podman[332876]: 2025-12-02 10:13:36.966194587 +0000 UTC m=+0.110954458 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:13:36 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:13:37 localhost podman[332926]: Dec 2 05:13:37 localhost podman[332926]: 2025-12-02 10:13:37.583372584 +0000 UTC m=+0.091100479 container create 60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:13:37 localhost systemd[1]: Started libpod-conmon-60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1.scope. Dec 2 05:13:37 localhost podman[332926]: 2025-12-02 10:13:37.53894529 +0000 UTC m=+0.046672795 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:13:37 localhost systemd[1]: Started libcrun container. 
Dec 2 05:13:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a16d668f099b0e7a88d67bc8bce0bf7943325785a7ad60e1335816d6a91e163/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:13:37 localhost podman[332926]: 2025-12-02 10:13:37.665444391 +0000 UTC m=+0.173171846 container init 60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:13:37 localhost podman[332926]: 2025-12-02 10:13:37.674353708 +0000 UTC m=+0.182081163 container start 60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 2 05:13:37 localhost dnsmasq[332944]: started, version 2.85 cachesize 150 Dec 2 05:13:37 localhost dnsmasq[332944]: DNS service limited to local subnets Dec 2 05:13:37 localhost dnsmasq[332944]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:13:37 localhost dnsmasq[332944]: warning: no upstream servers configured Dec 
2 05:13:37 localhost dnsmasq-dhcp[332944]: DHCP, static leases only on 10.100.0.16, lease time 1d Dec 2 05:13:37 localhost dnsmasq-dhcp[332944]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:13:37 localhost dnsmasq[332944]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 0 addresses Dec 2 05:13:37 localhost dnsmasq-dhcp[332944]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:37 localhost dnsmasq-dhcp[332944]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:37 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:13:37 localhost systemd[1]: tmp-crun.Ybkp9e.mount: Deactivated successfully. Dec 2 05:13:37 localhost dnsmasq[332944]: exiting on receipt of SIGTERM Dec 2 05:13:37 localhost podman[332962]: 2025-12-02 10:13:37.954850023 +0000 UTC m=+0.053459926 container kill 60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:13:37 localhost systemd[1]: libpod-60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1.scope: Deactivated successfully. 
Dec 2 05:13:37 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:37.955 263406 INFO neutron.agent.dhcp.agent [None req-36147f73-a4d6-4a96-9776-ffd898695cfe - - - - - -] DHCP configuration for ports {'0bb31a40-2592-41bf-9cb8-279241b602e7', '3d63b6f3-67ae-4c21-b56a-394abd9240e9', '280223a7-c06f-4632-bc9d-10fcc2daed96'} is completed#033[00m Dec 2 05:13:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:37.971 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9b4da99-dc68-46c9-bcf0-a3cfe207d767, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d63b6f3-67ae-4c21-b56a-394abd9240e9) old=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:37.973 160221 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d63b6f3-67ae-4c21-b56a-394abd9240e9 in datapath 55499ea7-fec3-45ce-8fdc-4c408cd7abf9 updated#033[00m Dec 2 05:13:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:37.975 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port f89c0c39-9ee7-4005-bfd3-e3f0da74be0e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:13:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:37.975 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:37 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:37.976 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[62c6f910-12e6-43c7-aab1-99c7ed5d3a2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:38 localhost podman[332976]: 2025-12-02 10:13:38.013875547 +0000 UTC m=+0.045921666 container died 60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:13:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1-userdata-shm.mount: Deactivated successfully. Dec 2 05:13:38 localhost systemd[1]: var-lib-containers-storage-overlay-5a16d668f099b0e7a88d67bc8bce0bf7943325785a7ad60e1335816d6a91e163-merged.mount: Deactivated successfully. Dec 2 05:13:38 localhost podman[332976]: 2025-12-02 10:13:38.059326668 +0000 UTC m=+0.091372717 container remove 60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:13:38 localhost systemd[1]: libpod-conmon-60ea29a264b98775429dd3977b442fb9d364ef8d722794c5eb5f26cf16d7c2e1.scope: Deactivated successfully. 
Dec 2 05:13:39 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:39.413 2 INFO neutron.agent.securitygroups_rpc [None req-f1fb19ca-fe9a-41d0-b171-21af469f3b04 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['5ce035be-6b85-468c-9f45-e514c3373f72', '4635549b-8be4-4094-becd-47d2d3f392be', '4dd0e6ef-da7b-4d17-b1c7-4a0b0fd81445']#033[00m Dec 2 05:13:39 localhost podman[333051]: Dec 2 05:13:39 localhost podman[333051]: 2025-12-02 10:13:39.471243194 +0000 UTC m=+0.089445165 container create 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 2 05:13:39 localhost systemd[1]: Started libpod-conmon-6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80.scope. Dec 2 05:13:39 localhost systemd[1]: tmp-crun.wBbCPU.mount: Deactivated successfully. Dec 2 05:13:39 localhost podman[333051]: 2025-12-02 10:13:39.43206907 +0000 UTC m=+0.050271061 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:13:39 localhost systemd[1]: Started libcrun container. 
Dec 2 05:13:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205472f177633ee42eceac8b70b6938bd41041da927576b34353c1bcb670607c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:13:39 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:39.536 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:13:39 localhost podman[333051]: 2025-12-02 10:13:39.550437104 +0000 UTC m=+0.168639065 container init 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 2 05:13:39 localhost podman[333051]: 2025-12-02 10:13:39.559238539 +0000 UTC m=+0.177440500 container start 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 
2 05:13:39 localhost dnsmasq[333069]: started, version 2.85 cachesize 150 Dec 2 05:13:39 localhost dnsmasq[333069]: DNS service limited to local subnets Dec 2 05:13:39 localhost dnsmasq[333069]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:13:39 localhost dnsmasq[333069]: warning: no upstream servers configured Dec 2 05:13:39 localhost dnsmasq-dhcp[333069]: DHCP, static leases only on 10.100.0.32, lease time 1d Dec 2 05:13:39 localhost dnsmasq-dhcp[333069]: DHCP, static leases only on 10.100.0.16, lease time 1d Dec 2 05:13:39 localhost dnsmasq-dhcp[333069]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:13:39 localhost dnsmasq[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 1 addresses Dec 2 05:13:39 localhost dnsmasq-dhcp[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:39 localhost dnsmasq-dhcp[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:39.622 263406 INFO neutron.agent.dhcp.agent [None req-3bcf75ff-0076-42e8-8fd1-340a2fbc9e9c - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:36Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0518379a-4019-4d66-afc5-66421b387adf, ip_allocation=immediate, mac_address=fa:16:3e:55:a8:d5, name=tempest-PortsTestJSON-39700250, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:43Z, description=, dns_domain=, id=55499ea7-fec3-45ce-8fdc-4c408cd7abf9, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-1562010144, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17376, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=2951, status=ACTIVE, subnets=['39177969-64aa-42f2-9d18-c7fc4745ec4f', '9c059950-606f-4465-be07-113be9f2db02'], tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:34Z, vlan_transparent=None, network_id=55499ea7-fec3-45ce-8fdc-4c408cd7abf9, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['5ce035be-6b85-468c-9f45-e514c3373f72'], standard_attr_id=3212, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:36Z on network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9#033[00m Dec 2 05:13:39 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:39.827 263406 INFO neutron.agent.dhcp.agent [None req-618cae67-ddcb-4ae1-89fc-fc2ee67fe0f9 - - - - - -] DHCP configuration for ports {'0bb31a40-2592-41bf-9cb8-279241b602e7', '0518379a-4019-4d66-afc5-66421b387adf', '280223a7-c06f-4632-bc9d-10fcc2daed96', '3d63b6f3-67ae-4c21-b56a-394abd9240e9'} is completed#033[00m Dec 2 05:13:39 localhost podman[333087]: 2025-12-02 10:13:39.898831638 +0000 UTC m=+0.053123747 container kill 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:13:39 localhost dnsmasq[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 1 addresses Dec 2 05:13:39 localhost dnsmasq-dhcp[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:39 localhost dnsmasq-dhcp[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:39 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:39.968 2 INFO neutron.agent.securitygroups_rpc [None req-1b71adfa-49cf-47f5-a7a4-715d1b19b4b9 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['4635549b-8be4-4094-becd-47d2d3f392be', '4dd0e6ef-da7b-4d17-b1c7-4a0b0fd81445']#033[00m Dec 2 05:13:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:40.083 263406 INFO neutron.agent.dhcp.agent [None req-bc572c6a-cfbb-4c30-94ec-401915257b51 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:36Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0518379a-4019-4d66-afc5-66421b387adf, ip_allocation=immediate, mac_address=fa:16:3e:55:a8:d5, name=tempest-PortsTestJSON-721231550, network_id=55499ea7-fec3-45ce-8fdc-4c408cd7abf9, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['4635549b-8be4-4094-becd-47d2d3f392be', '4dd0e6ef-da7b-4d17-b1c7-4a0b0fd81445'], standard_attr_id=3212, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:39Z on network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9#033[00m Dec 2 05:13:40 localhost dnsmasq-dhcp[333069]: 
DHCPRELEASE(tap280223a7-c0) 10.100.0.13 fa:16:3e:55:a8:d5 Dec 2 05:13:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:40.145 263406 INFO neutron.agent.dhcp.agent [None req-e234a8c3-06f7-4479-bf1f-7434af4be574 - - - - - -] DHCP configuration for ports {'0518379a-4019-4d66-afc5-66421b387adf'} is completed#033[00m Dec 2 05:13:40 localhost podman[333124]: 2025-12-02 10:13:40.672045603 +0000 UTC m=+0.051580015 container kill 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:13:40 localhost dnsmasq[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 1 addresses Dec 2 05:13:40 localhost dnsmasq-dhcp[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:40 localhost dnsmasq-dhcp[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:40 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:40.877 263406 INFO neutron.agent.dhcp.agent [None req-89a5a5e3-8fbd-40c3-9f34-67258da961a1 - - - - - -] DHCP configuration for ports {'0518379a-4019-4d66-afc5-66421b387adf'} is completed#033[00m Dec 2 05:13:41 localhost podman[333161]: 2025-12-02 10:13:41.077984891 +0000 UTC m=+0.063025570 container kill 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.name=CentOS Stream 9 Base Image, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:13:41 localhost dnsmasq[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/addn_hosts - 0 addresses Dec 2 05:13:41 localhost dnsmasq-dhcp[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/host Dec 2 05:13:41 localhost dnsmasq-dhcp[333069]: read /var/lib/neutron/dhcp/55499ea7-fec3-45ce-8fdc-4c408cd7abf9/opts Dec 2 05:13:41 localhost nova_compute[281854]: 2025-12-02 10:13:41.260 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:41 localhost nova_compute[281854]: 2025-12-02 10:13:41.265 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:13:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:13:41 localhost podman[333182]: 2025-12-02 10:13:41.508997387 +0000 UTC m=+0.149519555 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:13:41 localhost podman[333182]: 2025-12-02 10:13:41.5451256 +0000 UTC m=+0.185647708 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:13:41 localhost podman[333232]: 2025-12-02 10:13:41.547426951 +0000 UTC m=+0.055550221 container kill 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:13:41 localhost dnsmasq[333069]: exiting on receipt of SIGTERM Dec 2 05:13:41 localhost systemd[1]: libpod-6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80.scope: Deactivated successfully. Dec 2 05:13:41 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:13:41 localhost podman[333183]: 2025-12-02 10:13:41.454800393 +0000 UTC m=+0.092794474 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:13:41 localhost podman[333261]: 2025-12-02 10:13:41.594171817 +0000 UTC m=+0.032152768 container died 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 2 05:13:41 localhost podman[333183]: 2025-12-02 10:13:41.646931082 +0000 UTC m=+0.284925123 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 2 05:13:41 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:13:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:13:41 localhost podman[333261]: 2025-12-02 10:13:41.692567019 +0000 UTC m=+0.130548000 container remove 6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-55499ea7-fec3-45ce-8fdc-4c408cd7abf9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 2 05:13:41 localhost systemd[1]: libpod-conmon-6950087791e588658dbd4cfc3951061644dcb2052cbefefd7c292e3334632f80.scope: Deactivated successfully. Dec 2 05:13:41 localhost nova_compute[281854]: 2025-12-02 10:13:41.925 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:41 localhost ovn_controller[154505]: 2025-12-02T10:13:41Z|00599|binding|INFO|Releasing lport 280223a7-c06f-4632-bc9d-10fcc2daed96 from this chassis (sb_readonly=0) Dec 2 05:13:41 localhost kernel: device tap280223a7-c0 left promiscuous mode Dec 2 05:13:41 localhost ovn_controller[154505]: 2025-12-02T10:13:41Z|00600|binding|INFO|Setting lport 280223a7-c06f-4632-bc9d-10fcc2daed96 down in Southbound Dec 2 05:13:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:41.934 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28 
10.100.0.35/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9b4da99-dc68-46c9-bcf0-a3cfe207d767, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=280223a7-c06f-4632-bc9d-10fcc2daed96) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:13:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:41.936 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 280223a7-c06f-4632-bc9d-10fcc2daed96 in datapath 55499ea7-fec3-45ce-8fdc-4c408cd7abf9 unbound from our chassis#033[00m Dec 2 05:13:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:41.939 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:13:41 localhost ovn_metadata_agent[160216]: 2025-12-02 10:13:41.940 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[5b061811-e138-4386-9843-30a30dfd054d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:13:41 localhost nova_compute[281854]: 2025-12-02 10:13:41.947 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:41 localhost nova_compute[281854]: 2025-12-02 10:13:41.949 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:42.027 263406 INFO neutron.agent.dhcp.agent [None req-a0302869-e0c8-4bf3-b050-286ac267d1c1 - - - - - -] DHCP configuration for ports {'0bb31a40-2592-41bf-9cb8-279241b602e7', '3d63b6f3-67ae-4c21-b56a-394abd9240e9', '280223a7-c06f-4632-bc9d-10fcc2daed96'} is completed#033[00m Dec 2 05:13:42 localhost neutron_sriov_agent[256494]: 2025-12-02 10:13:42.059 2 INFO neutron.agent.securitygroups_rpc [None req-0f8f5643-0b03-43e9-aad8-6bac530a8f71 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']#033[00m Dec 2 05:13:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:42.186 263406 INFO neutron.agent.dhcp.agent [None req-c65976d6-4cad-4f96-b417-9742b58e99de - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:13:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:42.187 263406 INFO neutron.agent.dhcp.agent [None req-c65976d6-4cad-4f96-b417-9742b58e99de - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:13:42 localhost systemd[1]: var-lib-containers-storage-overlay-205472f177633ee42eceac8b70b6938bd41041da927576b34353c1bcb670607c-merged.mount: Deactivated successfully. Dec 2 05:13:42 localhost systemd[1]: run-netns-qdhcp\x2d55499ea7\x2dfec3\x2d45ce\x2d8fdc\x2d4c408cd7abf9.mount: Deactivated successfully. 
Dec 2 05:13:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:13:42.689 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:13:42 localhost ovn_controller[154505]: 2025-12-02T10:13:42Z|00601|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:13:42 localhost nova_compute[281854]: 2025-12-02 10:13:42.892 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:46 localhost nova_compute[281854]: 2025-12-02 10:13:46.265 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:47 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e222 e222: 6 total, 6 up, 6 in Dec 2 05:13:48 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e223 e223: 6 total, 6 up, 6 in Dec 2 05:13:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:13:50 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4272430941' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:13:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:13:50 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4272430941' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:13:51 localhost nova_compute[281854]: 2025-12-02 10:13:51.266 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e224 e224: 6 total, 6 up, 6 in Dec 2 05:13:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:13:54 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2476172005' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:13:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:13:54 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2476172005' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:13:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:13:55 localhost podman[333291]: 2025-12-02 10:13:55.453578675 +0000 UTC m=+0.090394510 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:13:55 localhost podman[333291]: 2025-12-02 10:13:55.465084841 +0000 UTC m=+0.101900706 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 2 05:13:55 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:13:56 localhost nova_compute[281854]: 2025-12-02 10:13:56.269 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:13:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:13:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e225 e225: 6 total, 6 up, 6 in Dec 2 05:13:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e226 e226: 6 total, 6 up, 6 in Dec 2 05:13:57 localhost nova_compute[281854]: 2025-12-02 10:13:57.840 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:13:58 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e227 e227: 6 total, 6 up, 6 in Dec 2 05:13:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:13:59 localhost systemd[1]: tmp-crun.UEUqKd.mount: Deactivated successfully. 
Dec 2 05:13:59 localhost podman[333311]: 2025-12-02 10:13:59.459165529 +0000 UTC m=+0.096944605 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:13:59 localhost podman[333311]: 2025-12-02 10:13:59.467982824 +0000 UTC 
m=+0.105761950 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 2 05:13:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:13:59 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:13:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:13:59 localhost podman[333329]: 2025-12-02 10:13:59.601886773 +0000 UTC m=+0.108117323 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:13:59 localhost podman[333329]: 2025-12-02 10:13:59.615001051 +0000 UTC m=+0.121231581 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:13:59 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:13:59 localhost podman[333348]: 2025-12-02 10:13:59.70688424 +0000 UTC m=+0.093755469 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6) Dec 2 05:13:59 localhost podman[333348]: 2025-12-02 10:13:59.723060742 +0000 UTC m=+0.109931961 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 05:13:59 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:14:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:14:00 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2655722515' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:14:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:14:00 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2655722515' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:00 localhost systemd[1]: tmp-crun.yNWv0y.mount: Deactivated successfully. Dec 2 05:14:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:14:00 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1778989957' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:14:00 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:14:00 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1778989957' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:01 localhost nova_compute[281854]: 2025-12-02 10:14:01.271 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:14:01 localhost nova_compute[281854]: 2025-12-02 10:14:01.275 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:01 localhost nova_compute[281854]: 2025-12-02 10:14:01.275 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:14:01 localhost nova_compute[281854]: 2025-12-02 10:14:01.276 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:14:01 localhost nova_compute[281854]: 2025-12-02 10:14:01.276 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:14:01 localhost nova_compute[281854]: 2025-12-02 10:14:01.279 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:14:02 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1151473184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:14:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:14:02 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1151473184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:02 localhost nova_compute[281854]: 2025-12-02 10:14:02.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:02 localhost nova_compute[281854]: 2025-12-02 10:14:02.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:14:02 localhost nova_compute[281854]: 2025-12-02 10:14:02.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:14:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:03.057 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:14:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:03.057 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:14:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:03.058 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.302 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.303 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.303 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.303 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:14:03 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e228 e228: 6 total, 6 up, 6 in Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.951 281858 DEBUG nova.network.neutron [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.967 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.967 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.968 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.968 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.969 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.990 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.991 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.991 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.991 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:14:03 localhost nova_compute[281854]: 2025-12-02 10:14:03.992 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:14:04 localhost openstack_network_exporter[242845]: ERROR 10:14:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:14:04 localhost openstack_network_exporter[242845]: ERROR 10:14:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:14:04 localhost openstack_network_exporter[242845]: ERROR 10:14:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:14:04 localhost openstack_network_exporter[242845]: ERROR 10:14:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:14:04 localhost openstack_network_exporter[242845]: Dec 2 05:14:04 localhost openstack_network_exporter[242845]: ERROR 10:14:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:14:04 localhost openstack_network_exporter[242845]: Dec 2 05:14:04 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Dec 2 05:14:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2984015493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.418 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.477 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.478 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.685 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.686 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11120MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.686 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.687 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.749 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.750 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.750 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.764 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.963 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 05:14:04 
localhost nova_compute[281854]: 2025-12-02 10:14:04.964 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:14:04 localhost nova_compute[281854]: 2025-12-02 10:14:04.991 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 05:14:05 localhost nova_compute[281854]: 2025-12-02 10:14:05.014 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: 
COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 05:14:05 localhost nova_compute[281854]: 2025-12-02 10:14:05.055 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:14:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Dec 2 05:14:05 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3165726944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:14:05 localhost nova_compute[281854]: 2025-12-02 10:14:05.489 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:14:05 localhost nova_compute[281854]: 2025-12-02 10:14:05.495 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:14:05 localhost nova_compute[281854]: 2025-12-02 10:14:05.510 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:14:05 localhost nova_compute[281854]: 2025-12-02 10:14:05.512 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:14:05 localhost nova_compute[281854]: 2025-12-02 10:14:05.513 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:14:06 localhost podman[240799]: time="2025-12-02T10:14:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:14:06 localhost podman[240799]: @ - - [02/Dec/2025:10:14:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:14:06 localhost podman[240799]: @ - - [02/Dec/2025:10:14:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18788 "" "Go-http-client/1.1" Dec 2 05:14:06 localhost nova_compute[281854]: 2025-12-02 10:14:06.275 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e229 e229: 6 total, 6 up, 6 in Dec 2 05:14:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:14:07 localhost systemd[1]: tmp-crun.HjbQ6E.mount: Deactivated successfully. 
Dec 2 05:14:07 localhost podman[333417]: 2025-12-02 10:14:07.443921215 +0000 UTC m=+0.085924082 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 2 05:14:07 localhost podman[333417]: 2025-12-02 10:14:07.456955702 +0000 UTC m=+0.098958589 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3) Dec 2 05:14:07 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:14:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e230 e230: 6 total, 6 up, 6 in Dec 2 05:14:08 localhost nova_compute[281854]: 2025-12-02 10:14:08.373 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:08 localhost nova_compute[281854]: 2025-12-02 10:14:08.373 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:08 localhost nova_compute[281854]: 2025-12-02 10:14:08.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:09 localhost nova_compute[281854]: 2025-12-02 10:14:09.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:09 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e231 e231: 6 total, 6 up, 6 in Dec 2 05:14:10 localhost nova_compute[281854]: 2025-12-02 10:14:10.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:11 localhost nova_compute[281854]: 2025-12-02 10:14:11.278 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:11 localhost nova_compute[281854]: 2025-12-02 10:14:11.282 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:14:11 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/232663665' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:14:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:14:11 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/232663665' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:11 localhost nova_compute[281854]: 2025-12-02 10:14:11.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:14:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:14:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:14:12 localhost podman[333437]: 2025-12-02 10:14:12.460396929 +0000 UTC m=+0.090558855 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller) Dec 2 05:14:12 localhost podman[333436]: 2025-12-02 10:14:12.435110595 +0000 UTC m=+0.071384004 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:14:12 localhost podman[333436]: 2025-12-02 10:14:12.521113476 +0000 UTC m=+0.157386845 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:14:12 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:14:12 localhost podman[333437]: 2025-12-02 10:14:12.538199252 +0000 UTC m=+0.168361168 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 2 05:14:12 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:14:12 localhost ovn_controller[154505]: 2025-12-02T10:14:12Z|00602|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.107 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.107 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.132 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.132 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to 
notifications. Payload={'message_id': 'f37bb8e3-8239-48bc-8433-55630bd11d1b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.107540', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a856bf2c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': '9723340741946c8fcd7f107655aa97b7d737f672cd88005eb2c14d8c035a6660'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.107540', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a856c99a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': '0c7e9cf85351109bc035e3fc4fd8ec48e93335d90e68beef9a5289032e539c70'}]}, 'timestamp': '2025-12-02 10:14:16.132757', '_unique_id': 'c6c9ca5766b842cc9333b7abe5012d8d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.133 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.134 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.137 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '989d654f-bb9d-46cc-9b20-128f0a2df808', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.134395', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a8578d6c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': 'f251a00ba28dad66b8feba7f0d322eafa5ecb9499aefc2f3891cec7fad4c587a'}]}, 'timestamp': '2025-12-02 10:14:16.137796', '_unique_id': '009aa6ecdca14817a3d5d172e8468624'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.138 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.138 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.148 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.148 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4725d00d-5aef-4cb1-b59d-a4b703003917', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.138880', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a859366c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.357920107, 'message_signature': '70b2e85cb4e52d8b845406e3a6d583a77d51b661de8b37710da826b6b3873bc1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.138880', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a8593fae-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.357920107, 'message_signature': '12db747c1e75361d4c856d52cc4bb2a6d32bd44671593ee431668f3feb365a9e'}]}, 'timestamp': '2025-12-02 10:14:16.148875', '_unique_id': 'e2c80c2b4dcc48ffa22a00b3aac6b3ed'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.149 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.149 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bd465bc5-36a2-4850-877c-4b82da95462e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.149982', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a85972da-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': '73d0af520d76c7a6a71f6b7a011ae26cf128ccde90a9f0a9b82239a8428fb49b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.149982', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a8597a3c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': '9ebadf4d50b7f64492a557bc428646e54f37e4841a3b07ab31119a5083125beb'}]}, 'timestamp': '2025-12-02 10:14:16.150371', '_unique_id': '07459e9e70c647708f0126dfe2582e9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.150 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.151 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.151 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.151 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.151 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.151 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd97bbe44-ce3d-4d3d-af31-59c09ef95b12', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.151510', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a859aeee-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.357920107, 'message_signature': 'f40c58d2d140af5e20680b5673f90724e5bd79dafbc959af212b7605baf31d90'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.151510', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a859b63c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.357920107, 'message_signature': '2ba74f3c5315a0b560caa0653a48364882e37acb7e3b4920a120f41ca3664fb3'}]}, 'timestamp': '2025-12-02 10:14:16.151906', '_unique_id': 'c792a07e43b34ef3ab8a6efbdbef44a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.152 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.167 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 19640000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e887434d-3d7a-4262-a4c5-10f8a2d7798b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 19640000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:14:16.152899', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'a85c1dfa-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.386399996, 'message_signature': '42d0a87754bed12bd62444f99575fd12f7b371d83f269cdddb592b89e887faca'}]}, 'timestamp': '2025-12-02 10:14:16.167711', '_unique_id': '64969483ca794319a7d624e61473c254'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.168 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:14:16.168 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cc5a2a7a-f05e-4e87-8b49-af72bdc84d93', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.168827', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': 'a85c5306-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': '78f88254559be46262428dbcbd5a5ff000337803a8bc22ea37c4b1435b20544f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.168827', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a85c5bf8-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': 'a765744e2a04a86265c31ead38e242efc87039f8cb67ed0a9594bebbc79a5646'}]}, 'timestamp': '2025-12-02 10:14:16.169259', '_unique_id': '5e6809f62b68445fb77b7beec736f7ce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.169 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 05:14:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0ad86f35-1bbb-461a-a117-31c77fa74224', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:14:16.170259', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'a85c8ad8-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.386399996, 'message_signature': 'a48dab2d0643e0f7afd5cefeb892f23253a0379ae9b76f927bbb0606819c9af8'}]}, 'timestamp': '2025-12-02 10:14:16.170462', '_unique_id': 'c9daf1d9fd154f3eac787ca49254a74a'}: 
kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:14:16.170 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging The above 
exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] 
Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.170 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.171 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.171 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.171 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6a95014c-3a7e-44ae-97cd-16a97a13b290', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.171441', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a85cb8f0-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': '44fab6215b23fb9a5e2792bb7678016ddb9c89d95f66340451d177fc83e024ee'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.171441', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a85cc0de-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': 'bb0525c0cb07109ff8b28ba6555e6301585c3883a5d6474138dd20b19d47af63'}]}, 'timestamp': '2025-12-02 10:14:16.171838', '_unique_id': '97448fb6fa964ca6bedd4964d3a359f2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.172 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.173 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.173 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'a2da6ead-ed29-4be6-bc0c-3fda0dc0bf99', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.173009', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a85cf806-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.357920107, 'message_signature': 'dbfe0fa0e50122da96e91beefcc6be662052e432d48e7bd8b44976d37407155e'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.173009', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a85d01ca-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.357920107, 'message_signature': '8676bb19213844229cbd3f843214b317e2a4dc2e5d98a98ca1982990b9b07718'}]}, 'timestamp': '2025-12-02 10:14:16.173538', '_unique_id': 'fbce09f86e47423480d92fc057fde8c3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.174 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': 'aa91af7b-7f8d-43b4-98b2-4d13897fb273', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.174706', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85d38a2-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': 'f5ea2312f206ad5a2f1ad312777b49b63e6d633a2244f6febf0bd620239d5f03'}]}, 'timestamp': '2025-12-02 10:14:16.174918', '_unique_id': '3165e548de4a4631b7dde255c364b770'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.175 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.175 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '950c3472-2fe4-4af5-a8e5-c3c7e64447e6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.175867', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a85d65ca-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': '33ab1073d47798292ec163f3d30d1746b3cd4d3edeb1061e7186bdf53f7b0749'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.175867', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a85d6cd2-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': '19104fdeb9739b16a814863bb5ef0525c38940914072e8625f63f524b62b3325'}]}, 'timestamp': '2025-12-02 10:14:16.176240', '_unique_id': '8dd580276b884d6781eaa0b5173d4799'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.176 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.177 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ea47b1b6-b889-4ec9-b579-ed43291bf564', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.177230', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85d9b30-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': '77a6c23262067e2c1d150a6814df116c20b1df7f43bf9ff797dbd889b2ab503e'}]}, 'timestamp': '2025-12-02 10:14:16.177441', '_unique_id': '821800e62082406d801e97f522ea753f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.177 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.178 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.178 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'faaea1a2-7975-455a-8d1b-bf56c7c09ec8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.178382', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85dc81c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': 'a2d5a64b38c6e528b41e74da7390ee7558dff62d1a5333b442853e81425d679c'}]}, 'timestamp': '2025-12-02 10:14:16.178590', '_unique_id': '2da665abab774a6eb89a25421bfb4c86'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.179 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.179 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9fcd09ec-9615-4b08-963b-3d23405d277d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.179595', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85df81e-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': '32be730c6244800a895ae62a5f7584371ea3ca7079d0130999d2c451f88ee54c'}]}, 'timestamp': '2025-12-02 10:14:16.179821', '_unique_id': '3929b4f0225e4ee3a11b7f1c0e4bee45'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 
05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.180 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0ed51565-9b5e-4281-a2f6-f9be2f9a16ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.180821', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85e276c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': '4d7d734603458d99e59f7b1bdb0215465721b1fa6b0e045cd7796315bedc3c80'}]}, 'timestamp': '2025-12-02 10:14:16.181050', '_unique_id': 'c8acf6c0bc94418e8d7b754bff21d542'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.181 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.182 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'af01bc8e-0198-474f-9a6b-c2743b743211', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.182121', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None,
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85e5a16-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': 'c973c4175094f64d5cb60c7758222109271b72faedc70103cdcd7f9e424df75e'}]}, 'timestamp': '2025-12-02 10:14:16.182327', '_unique_id': '5f39666e9d004bb4be36d4f88a04d2fb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self._connection
= self._establish_connection()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging
ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:14:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.182 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'a1a52e21-c373-42fc-8be4-0595dcfd2c2a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.183261', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85e86b2-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': 'd2188805f1fb9f3644eb46163e9e4999b5acd850d9521544b6f6dcaf687790ac'}]}, 'timestamp': '2025-12-02 10:14:16.183470', '_unique_id': '95806585c4a345be9b0467ff876128bd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:14:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:14:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2
05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:14:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.183 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 10:14:16.184 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.184 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9d3f55b7-65d9-4c5d-8137-fc72c0377061', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.184455', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge':
None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85eb556-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': '3d55957b5b1094c2afee9ba410be9ef37c5b1ef278047f6e31bc88b65eecddba'}]}, 'timestamp': '2025-12-02 10:14:16.184679', '_unique_id': '27a9cc3d07e64fce9aa87def267355d3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self._connection =
self._establish_connection()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging
ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.185 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3a104d50-a4ef-44e2-b438-b6ce8e561405', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:14:16.185766', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'a85ee88c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.353446467, 'message_signature': 'aa6d5c30ce56c930e74466a8e65789b0032bd13979754868276700bf5e5bcdd0'}]}, 'timestamp': '2025-12-02 10:14:16.185975', '_unique_id': '5fa9cf410ad544678be2cc1d8b09d04e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.186 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:14:16.187 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.187 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.187 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2b9aa1db-a9c1-469d-b214-2390272a39c5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:14:16.187209', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'a85f2108-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': 'd0e041c6e14079f37278356163876cd8cda6b63b86d285a14a0c519ee805be44'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:14:16.187209', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'a85f2810-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12618.326583072, 'message_signature': 'c4252962afd0894d87ef3e0d182a4dbfcd14f00d20ba38019a9bef4641bfc3be'}]}, 'timestamp': '2025-12-02 10:14:16.187586', '_unique_id': 'f876ee59c48646a6a877f225712a52cd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:14:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:14:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:14:16.188 12 ERROR oslo_messaging.notify.messaging Dec 2 05:14:16 localhost nova_compute[281854]: 2025-12-02 
10:14:16.281 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:14:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:14:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e232 e232: 6 total, 6 up, 6 in
Dec 2 05:14:17 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch
Dec 2 05:14:17 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:14:17 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} : dispatch
Dec 2 05:14:17 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"}]': finished
Dec 2 05:14:18 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch
Dec 2 05:14:18 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} : dispatch
Dec 2 05:14:18 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} : dispatch
Dec 2 05:14:18 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"}]': finished
Dec 2 05:14:18 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e233 e233: 6 total, 6 up, 6 in
Dec 2 05:14:19 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e234 e234: 6 total, 6 up, 6 in
Dec 2 05:14:20 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e235 e235: 6 total, 6 up, 6 in
Dec 2 05:14:21 localhost nova_compute[281854]: 2025-12-02 10:14:21.284 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:14:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:14:23 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e236 e236: 6 total, 6 up, 6 in
Dec 2 05:14:25 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e237 e237: 6 total, 6 up, 6 in
Dec 2 05:14:26 localhost nova_compute[281854]: 2025-12-02 10:14:26.288 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:14:26 localhost ceph-mon[298296]:
mon.np0005541913@1(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:14:26 localhost podman[333484]: 2025-12-02 10:14:26.447386396 +0000 UTC m=+0.090490703 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0) Dec 2 05:14:26 localhost podman[333484]: 2025-12-02 10:14:26.458296537 +0000 UTC m=+0.101400824 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 2 05:14:26 
localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:14:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e238 e238: 6 total, 6 up, 6 in Dec 2 05:14:27 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e239 e239: 6 total, 6 up, 6 in Dec 2 05:14:29 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e240 e240: 6 total, 6 up, 6 in Dec 2 05:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:14:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:14:30 localhost podman[333505]: 2025-12-02 10:14:30.459566709 +0000 UTC m=+0.087109282 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Dec 2 05:14:30 localhost podman[333505]: 2025-12-02 10:14:30.470209813 +0000 UTC m=+0.097752416 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 2 05:14:30 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
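[editor's note] The ceilometer_agent_compute traceback earlier in this capture ends in kombu.exceptions.OperationalError: [Errno 111] Connection refused, i.e. nothing was accepting TCP connections on the agent's RabbitMQ endpoint at that moment. Errno 111 is plain ECONNREFUSED; a minimal stdlib sketch reproducing it, using a throwaway local port instead of a real broker (no kombu or RabbitMQ assumed):

```python
import errno
import socket

# Grab an ephemeral port, then close the listener so nothing is listening
# on that port any more -- the same condition the ceilometer agent hit
# against its (unreachable) message broker.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # kernel picks a free port
port = listener.getsockname()[1]
listener.close()

try:
    socket.create_connection(("127.0.0.1", port), timeout=2)
    err = None
except OSError as exc:
    err = exc.errno                  # 111 (ECONNREFUSED) on Linux

print(err == errno.ECONNREFUSED)
```

kombu wraps this OSError in its own OperationalError after its retry loop (retry_over_time in the traceback) gives up.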
Dec 2 05:14:30 localhost podman[333504]: 2025-12-02 10:14:30.530480589 +0000 UTC m=+0.160736644 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true) Dec 2 05:14:30 localhost podman[333504]: 2025-12-02 10:14:30.565070591 +0000 UTC 
m=+0.195326696 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2) Dec 2 05:14:30 localhost systemd[1]: tmp-crun.2YUcE9.mount: Deactivated successfully. 
Dec 2 05:14:30 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:14:30 localhost podman[333506]: 2025-12-02 10:14:30.58568597 +0000 UTC m=+0.206563396 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:14:30 localhost podman[333506]: 2025-12-02 10:14:30.599217671 +0000 UTC m=+0.220095107 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:14:30 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
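[editor's note] Each health-check cycle above follows the same shape: systemd starts a transient `podman healthcheck run <id>` unit, podman emits a `health_status` event and then `exec_died`, and the unit deactivates. The container name and health result sit inside the event's parenthesized attribute list as `name=...` and `health_status=...`. A small extraction sketch; the regexes are fitted to these lines, not a documented podman event format:

```python
import re

def health_from_event(line):
    """Pull (container name, health status) out of a podman event line."""
    name = re.search(r"[ (]name=([^,)]+)", line)
    status = re.search(r"[ (]health_status=([^,)]+)", line)
    return (name.group(1) if name else None,
            status.group(1) if status else None)

# Trimmed sample modeled on the node_exporter event above.
sample = ("container health_status 89e2cca4d9a4 (image=quay.io/prometheus/"
          "node-exporter@sha256:39c6, name=node_exporter, "
          "health_status=healthy, managed_by=edpm_ansible)")
print(health_from_event(sample))   # ('node_exporter', 'healthy')
```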
Dec 2 05:14:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:31 localhost nova_compute[281854]: 2025-12-02 10:14:31.289 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e241 e241: 6 total, 6 up, 6 in Dec 2 05:14:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:14:32 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3513386139' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:14:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:14:32 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3513386139' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:33 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e242 e242: 6 total, 6 up, 6 in Dec 2 05:14:34 localhost openstack_network_exporter[242845]: ERROR 10:14:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:14:34 localhost openstack_network_exporter[242845]: ERROR 10:14:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:14:34 localhost openstack_network_exporter[242845]: ERROR 10:14:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:14:34 localhost openstack_network_exporter[242845]: ERROR 10:14:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:14:34 localhost openstack_network_exporter[242845]: Dec 2 05:14:34 localhost openstack_network_exporter[242845]: ERROR 10:14:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:14:34 localhost openstack_network_exporter[242845]: Dec 2 05:14:34 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:14:34 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3236740117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:14:34 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:14:34 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3236740117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:14:34 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:14:34 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:14:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:35.767 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:14:35 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:35.768 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:14:35 localhost nova_compute[281854]: 2025-12-02 10:14:35.817 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:35 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e243 e243: 6 total, 6 up, 6 in Dec 2 05:14:36 localhost podman[240799]: time="2025-12-02T10:14:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:14:36 localhost podman[240799]: @ - - [02/Dec/2025:10:14:36 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:14:36 localhost podman[240799]: @ - - [02/Dec/2025:10:14:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18786 "" "Go-http-client/1.1" Dec 2 05:14:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:36 localhost nova_compute[281854]: 2025-12-02 10:14:36.295 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e244 e244: 6 total, 6 up, 6 in Dec 2 05:14:38 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:14:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
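[editor's note] Throughout this window the ceph-mon peon logs a steady stream of osdmap epochs of the form "osd e241 e241: 6 total, 6 up, 6 in"; the cluster stays healthy whenever the total/up/in counts agree. A throwaway parser for those summaries; the regex is fitted to the lines above, not a stable Ceph log format:

```python
import re

OSDMAP = re.compile(r"osd e(\d+) e\d+: (\d+) total, (\d+) up, (\d+) in")

def osdmap_summary(line):
    """Return (epoch, total, up, in) from a ceph-mon osdmap log line."""
    m = OSDMAP.search(line)
    return tuple(map(int, m.groups())) if m else None

line = "mon.np0005541913@1(peon).osd e241 e241: 6 total, 6 up, 6 in"
print(osdmap_summary(line))   # (241, 6, 6, 6) -- all OSDs up and in
```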
Dec 2 05:14:38 localhost podman[333651]: 2025-12-02 10:14:38.460548019 +0000 UTC m=+0.089744153 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:14:38 localhost podman[333651]: 2025-12-02 10:14:38.471887721 +0000 UTC m=+0.101083815 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:14:38 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
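[editor's note] The config_data blobs in the multipathd (and other) health-check events list bind mounts as podman-style 'src:dst[:options]' strings. A small splitter for that shape, assuming no colons inside the paths themselves (true of every entry above):

```python
def parse_volume(spec):
    """Split a podman-style bind-mount spec 'src:dst[:opts]' into parts."""
    parts = spec.split(":")
    src, dst = parts[0], parts[1]
    opts = parts[2].split(",") if len(parts) > 2 else []
    return src, dst, opts

# Entry taken from the multipathd container's volume list above.
vol = parse_volume("/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z")
print(vol)   # ('/var/lib/openstack/healthchecks/multipathd', '/openstack', ['ro', 'z'])
```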
Dec 2 05:14:39 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:39.770 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:14:40 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 2 05:14:40 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:40 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:40 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:14:40 localhost neutron_dhcp_agent[263402]: 
2025-12-02 10:14:40.662 263406 INFO neutron.agent.linux.ip_lib [None req-6ca1148e-65b8-484e-8841-ece0613bc433 - - - - - -] Device tapf119cdef-09 cannot be used as it has no MAC address#033[00m
Dec 2 05:14:40 localhost nova_compute[281854]: 2025-12-02 10:14:40.686 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:14:40 localhost kernel: device tapf119cdef-09 entered promiscuous mode
Dec 2 05:14:40 localhost NetworkManager[5965]: [1764670480.6962] manager: (tapf119cdef-09): new Generic device (/org/freedesktop/NetworkManager/Devices/93)
Dec 2 05:14:40 localhost nova_compute[281854]: 2025-12-02 10:14:40.698 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:14:40 localhost ovn_controller[154505]: 2025-12-02T10:14:40Z|00603|binding|INFO|Claiming lport f119cdef-0974-4d2c-8acd-8d7464640ca9 for this chassis.
Dec 2 05:14:40 localhost ovn_controller[154505]: 2025-12-02T10:14:40Z|00604|binding|INFO|f119cdef-0974-4d2c-8acd-8d7464640ca9: Claiming unknown
Dec 2 05:14:40 localhost systemd-udevd[333681]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:14:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:40.709 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-8703a229-8c49-443e-95c6-aff62a358434', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8703a229-8c49-443e-95c6-aff62a358434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd858413a9b01463f96545916d2abe5ab', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22d83034-71a8-46e9-a33a-f696e74c13f0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f119cdef-0974-4d2c-8acd-8d7464640ca9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:14:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:40.711 160221 INFO neutron.agent.ovn.metadata.agent [-] Port f119cdef-0974-4d2c-8acd-8d7464640ca9 in datapath 8703a229-8c49-443e-95c6-aff62a358434 bound to our chassis#033[00m Dec 2 05:14:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:40.713 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port eba8a1ff-9260-4962-baad-7ee950876ce0 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:14:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:40.714 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8703a229-8c49-443e-95c6-aff62a358434, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:14:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:14:40.715 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[8e567293-9308-4275-af4a-f12ae6530c35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:14:40 localhost journal[230136]: ethtool ioctl error on tapf119cdef-09: No such device Dec 2 05:14:40 localhost ovn_controller[154505]: 2025-12-02T10:14:40Z|00605|binding|INFO|Setting lport f119cdef-0974-4d2c-8acd-8d7464640ca9 ovn-installed in OVS Dec 2 05:14:40 localhost ovn_controller[154505]: 2025-12-02T10:14:40Z|00606|binding|INFO|Setting lport f119cdef-0974-4d2c-8acd-8d7464640ca9 up in Southbound Dec 2 05:14:40 localhost nova_compute[281854]: 2025-12-02 10:14:40.730 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:40 localhost journal[230136]: ethtool ioctl error on tapf119cdef-09: No such device Dec 2 05:14:40 localhost nova_compute[281854]: 2025-12-02 10:14:40.732 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:40 localhost journal[230136]: ethtool ioctl error on tapf119cdef-09: No such device Dec 2 05:14:40 localhost journal[230136]: ethtool ioctl error on tapf119cdef-09: No such device Dec 2 05:14:40 localhost journal[230136]: ethtool ioctl error on tapf119cdef-09: No such device Dec 2 05:14:40 localhost journal[230136]: ethtool ioctl error on tapf119cdef-09: No such device 
Dec 2 05:14:40 localhost journal[230136]: ethtool ioctl error on tapf119cdef-09: No such device Dec 2 05:14:40 localhost journal[230136]: ethtool ioctl error on tapf119cdef-09: No such device Dec 2 05:14:40 localhost nova_compute[281854]: 2025-12-02 10:14:40.777 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:40 localhost nova_compute[281854]: 2025-12-02 10:14:40.810 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:41 localhost nova_compute[281854]: 2025-12-02 10:14:41.295 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:41 localhost nova_compute[281854]: 2025-12-02 10:14:41.299 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:41 localhost nova_compute[281854]: 2025-12-02 10:14:41.571 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:41 localhost podman[333752]: Dec 2 05:14:41 localhost podman[333752]: 2025-12-02 10:14:41.678072623 +0000 UTC m=+0.077612459 container create 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:14:41 localhost systemd[1]: Started libpod-conmon-069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5.scope. Dec 2 05:14:41 localhost systemd[1]: tmp-crun.FXGEDv.mount: Deactivated successfully. Dec 2 05:14:41 localhost podman[333752]: 2025-12-02 10:14:41.630251678 +0000 UTC m=+0.029791574 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:14:41 localhost systemd[1]: Started libcrun container. Dec 2 05:14:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a0ee3792bce35a0d1020085525a61984484526151f073e1f67a4a8079d9209d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:14:41 localhost podman[333752]: 2025-12-02 10:14:41.762782551 +0000 UTC m=+0.162322417 container init 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:14:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e245 e245: 6 total, 6 up, 6 in Dec 2 05:14:41 localhost podman[333752]: 2025-12-02 10:14:41.768600506 +0000 UTC m=+0.168140382 container start 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:14:41 localhost dnsmasq[333770]: started, version 2.85 cachesize 150 Dec 2 05:14:41 localhost dnsmasq[333770]: DNS service limited to local subnets Dec 2 05:14:41 localhost dnsmasq[333770]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:14:41 localhost dnsmasq[333770]: warning: no upstream servers configured Dec 2 05:14:41 localhost dnsmasq-dhcp[333770]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:14:41 localhost dnsmasq[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/addn_hosts - 0 addresses Dec 2 05:14:41 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/host Dec 2 05:14:41 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/opts Dec 2 05:14:41 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:14:41.921 263406 INFO neutron.agent.dhcp.agent [None req-8ac12d5b-b1aa-4eb4-818a-48834904d48d - - - - - -] DHCP configuration for ports {'37cd0238-9054-48a1-8d6c-4a73284b3493'} is completed#033[00m Dec 2 05:14:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:14:42.554 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:14:42Z, description=, device_id=3692a4cb-56a0-4a89-90aa-c2a2654d3e13, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=7d96b16f-bbce-4bbd-b3cf-c85a927f8c04, ip_allocation=immediate, mac_address=fa:16:3e:6d:09:0a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:14:38Z, description=, dns_domain=, id=8703a229-8c49-443e-95c6-aff62a358434, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1306125232-network, port_security_enabled=True, project_id=d858413a9b01463f96545916d2abe5ab, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=770, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3448, status=ACTIVE, subnets=['9d626c62-851c-4a11-822f-bd4dadd5e8b1'], tags=[], tenant_id=d858413a9b01463f96545916d2abe5ab, updated_at=2025-12-02T10:14:38Z, vlan_transparent=None, network_id=8703a229-8c49-443e-95c6-aff62a358434, port_security_enabled=False, project_id=d858413a9b01463f96545916d2abe5ab, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3478, status=DOWN, tags=[], tenant_id=d858413a9b01463f96545916d2abe5ab, updated_at=2025-12-02T10:14:42Z on network 8703a229-8c49-443e-95c6-aff62a358434#033[00m Dec 2 05:14:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:14:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:14:42 localhost podman[333772]: 2025-12-02 10:14:42.683635719 +0000 UTC m=+0.068001553 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2) Dec 2 05:14:42 localhost systemd[1]: tmp-crun.BBE2DB.mount: Deactivated successfully. 
Dec 2 05:14:42 localhost podman[333817]: 2025-12-02 10:14:42.753247014 +0000 UTC m=+0.056210038 container kill 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 2 05:14:42 localhost dnsmasq[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/addn_hosts - 1 addresses Dec 2 05:14:42 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/host Dec 2 05:14:42 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/opts Dec 2 05:14:42 localhost podman[333772]: 2025-12-02 10:14:42.775044786 +0000 UTC m=+0.159410650 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true) Dec 2 05:14:42 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:14:42 localhost podman[333771]: 2025-12-02 10:14:42.81084427 +0000 UTC m=+0.195987774 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:14:42 localhost podman[333771]: 2025-12-02 10:14:42.821959476 +0000 UTC m=+0.207102970 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, 
name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:14:42 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:14:42 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:14:42.979 263406 INFO neutron.agent.dhcp.agent [None req-5ae50839-ca1e-474b-a69d-832c8f395340 - - - - - -] DHCP configuration for ports {'7d96b16f-bbce-4bbd-b3cf-c85a927f8c04'} is completed#033[00m Dec 2 05:14:43 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 2 05:14:43 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:43 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:43 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:14:44 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:14:44.054 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:14:42Z, description=, device_id=3692a4cb-56a0-4a89-90aa-c2a2654d3e13, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7d96b16f-bbce-4bbd-b3cf-c85a927f8c04, ip_allocation=immediate, mac_address=fa:16:3e:6d:09:0a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:14:38Z, description=, dns_domain=, id=8703a229-8c49-443e-95c6-aff62a358434, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1306125232-network, port_security_enabled=True, project_id=d858413a9b01463f96545916d2abe5ab, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=770, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3448, status=ACTIVE, subnets=['9d626c62-851c-4a11-822f-bd4dadd5e8b1'], tags=[], tenant_id=d858413a9b01463f96545916d2abe5ab, 
updated_at=2025-12-02T10:14:38Z, vlan_transparent=None, network_id=8703a229-8c49-443e-95c6-aff62a358434, port_security_enabled=False, project_id=d858413a9b01463f96545916d2abe5ab, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3478, status=DOWN, tags=[], tenant_id=d858413a9b01463f96545916d2abe5ab, updated_at=2025-12-02T10:14:42Z on network 8703a229-8c49-443e-95c6-aff62a358434#033[00m Dec 2 05:14:44 localhost podman[333874]: 2025-12-02 10:14:44.266535222 +0000 UTC m=+0.060425321 container kill 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:14:44 localhost dnsmasq[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/addn_hosts - 1 addresses Dec 2 05:14:44 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/host Dec 2 05:14:44 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/opts Dec 2 05:14:44 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:14:44.526 263406 INFO neutron.agent.dhcp.agent [None req-63405262-5246-4eec-a3b7-b293c01031be - - - - - -] DHCP configuration for ports {'7d96b16f-bbce-4bbd-b3cf-c85a927f8c04'} is completed#033[00m Dec 2 05:14:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:46 localhost nova_compute[281854]: 2025-12-02 
10:14:46.300 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:46 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 2 05:14:46 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 2 05:14:46 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 2 05:14:46 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. 
Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.146092) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490146169, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2301, "num_deletes": 265, "total_data_size": 3072457, "memory_usage": 3119864, "flush_reason": "Manual Compaction"} Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490160564, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 1997238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32012, "largest_seqno": 34308, "table_properties": {"data_size": 1987914, "index_size": 5705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 22490, "raw_average_key_size": 22, "raw_value_size": 1968271, "raw_average_value_size": 1948, "num_data_blocks": 246, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670381, "oldest_key_time": 1764670381, "file_creation_time": 1764670490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14558 microseconds, and 6376 cpu microseconds. Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.160659) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 1997238 bytes OK Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.160683) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.163466) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.163490) EVENT_LOG_v1 {"time_micros": 1764670490163483, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.163513) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3061557, prev total WAL file size 
3061557, number of live WAL files 2. Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.164550) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end) Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(1950KB)], [54(17MB)] Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490164643, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 20451874, "oldest_snapshot_seqno": -1} Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 13998 keys, 18903857 bytes, temperature: kUnknown Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490269047, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 18903857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18821342, "index_size": 46446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 374706, "raw_average_key_size": 26, "raw_value_size": 18580746, 
"raw_average_value_size": 1327, "num_data_blocks": 1750, "num_entries": 13998, "num_filter_entries": 13998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}} Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.269460) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 18903857 bytes Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271358) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.7 rd, 180.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 17.6 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(19.7) write-amplify(9.5) OK, records in: 14541, records dropped: 543 output_compression: NoCompression Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271389) EVENT_LOG_v1 {"time_micros": 1764670490271376, "job": 32, "event": "compaction_finished", "compaction_time_micros": 104516, "compaction_time_cpu_micros": 52126, "output_level": 6, "num_output_files": 1, "total_output_size": 18903857, "num_input_records": 14541, "num_output_records": 13998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490272095, "job": 32, "event": "table_file_deletion", "file_number": 56} Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490275033, "job": 
32, "event": "table_file_deletion", "file_number": 54} Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.164444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.275120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.275126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.275129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.275132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:14:50.275135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:14:50 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 2 05:14:50 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:50 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:14:50 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:14:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:51 localhost nova_compute[281854]: 2025-12-02 10:14:51.304 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:51 localhost nova_compute[281854]: 2025-12-02 10:14:51.308 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:53 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:14:53 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2237002208' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:14:53 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e246 e246: 6 total, 6 up, 6 in Dec 2 05:14:53 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 2 05:14:53 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 2 05:14:53 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 2 05:14:54 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Dec 2 05:14:54 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e247 e247: 6 total, 6 up, 6 in Dec 2 05:14:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:14:56 localhost nova_compute[281854]: 2025-12-02 10:14:56.307 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:14:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:14:57 localhost podman[333895]: 2025-12-02 10:14:57.444601735 +0000 UTC m=+0.084895614 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:14:57 localhost podman[333895]: 2025-12-02 10:14:57.487275322 +0000 UTC m=+0.127569111 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:14:57 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:14:57 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 2 05:14:58 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 2 05:14:58 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 2 05:14:58 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Dec 2 05:14:59 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e248 e248: 6 total, 6 up, 6 in Dec 2 05:14:59 localhost nova_compute[281854]: 2025-12-02 10:14:59.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e249 e249: 6 total, 6 up, 6 in Dec 2 05:15:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:01 localhost nova_compute[281854]: 2025-12-02 10:15:01.308 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:01 localhost nova_compute[281854]: 2025-12-02 10:15:01.312 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. 
Dec 2 05:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:15:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:15:01 localhost podman[333916]: 2025-12-02 10:15:01.454697669 +0000 UTC m=+0.087861922 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 2 05:15:01 localhost podman[333917]: 2025-12-02 10:15:01.509457239 +0000 UTC m=+0.142604862 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:15:01 localhost podman[333916]: 2025-12-02 10:15:01.517265896 +0000 UTC m=+0.150430129 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm) Dec 2 05:15:01 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:15:01 localhost podman[333915]: 2025-12-02 10:15:01.554843258 +0000 UTC m=+0.191577647 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 2 05:15:01 localhost podman[333915]: 2025-12-02 10:15:01.563977581 +0000 UTC 
m=+0.200712010 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:15:01 localhost podman[333917]: 2025-12-02 10:15:01.575319433 +0000 UTC m=+0.208467036 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:15:01 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:15:01 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:15:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:15:01 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2758567838' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:15:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:15:01 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2758567838' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:15:02 localhost nova_compute[281854]: 2025-12-02 10:15:02.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:02 localhost nova_compute[281854]: 2025-12-02 10:15:02.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:15:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:15:03.058 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:15:03.058 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:15:03.059 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:03 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e250 e250: 6 total, 6 up, 6 in Dec 2 05:15:03 localhost nova_compute[281854]: 2025-12-02 10:15:03.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:03 localhost nova_compute[281854]: 2025-12-02 10:15:03.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:15:03 localhost nova_compute[281854]: 2025-12-02 10:15:03.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the 
list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:15:03 localhost nova_compute[281854]: 2025-12-02 10:15:03.920 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:15:03 localhost nova_compute[281854]: 2025-12-02 10:15:03.921 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:15:03 localhost nova_compute[281854]: 2025-12-02 10:15:03.921 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:15:03 localhost nova_compute[281854]: 2025-12-02 10:15:03.922 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:15:04 localhost openstack_network_exporter[242845]: ERROR 10:15:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:15:04 localhost openstack_network_exporter[242845]: ERROR 10:15:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:15:04 localhost openstack_network_exporter[242845]: ERROR 10:15:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:15:04 localhost 
openstack_network_exporter[242845]: ERROR 10:15:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:15:04 localhost openstack_network_exporter[242845]: Dec 2 05:15:04 localhost openstack_network_exporter[242845]: ERROR 10:15:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:15:04 localhost openstack_network_exporter[242845]: Dec 2 05:15:04 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 2 05:15:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/878020292' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 2 05:15:04 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 2 05:15:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/878020292' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.443 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.459 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.460 281858 DEBUG 
nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.460 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.476 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.476 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.477 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.477 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) 
update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.478 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:04 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:15:04 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/154205066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:15:04 localhost nova_compute[281854]: 2025-12-02 10:15:04.981 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.055 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.056 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:15:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e251 e251: 6 total, 6 up, 6 in Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.247 
281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.248 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11070MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", 
"vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.249 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.249 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.331 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.331 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.331 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.365 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:15:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:15:05 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/153256759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.808 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:15:05 localhost nova_compute[281854]: 2025-12-02 10:15:05.815 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:15:06 localhost podman[240799]: time="2025-12-02T10:15:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:15:06 localhost nova_compute[281854]: 2025-12-02 10:15:06.051 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:15:06 localhost nova_compute[281854]: 2025-12-02 10:15:06.054 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:15:06 localhost nova_compute[281854]: 2025-12-02 10:15:06.054 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:15:06 localhost podman[240799]: @ - - [02/Dec/2025:10:15:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156107 "" "Go-http-client/1.1" Dec 2 05:15:06 localhost podman[240799]: @ - - [02/Dec/2025:10:15:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19260 "" "Go-http-client/1.1" Dec 2 05:15:06 localhost nova_compute[281854]: 2025-12-02 10:15:06.312 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:06 localhost nova_compute[281854]: 2025-12-02 10:15:06.315 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e252 e252: 6 total, 6 up, 6 in Dec 2 05:15:07 localhost ceph-mgr[288059]: client.0 ms_handle_reset on v2:172.18.0.108:6810/4212177170 Dec 2 05:15:08 localhost neutron_sriov_agent[256494]: 2025-12-02 10:15:08.864 2 INFO neutron.agent.securitygroups_rpc [None req-e4074800-d361-45b9-b812-e8981daf28f3 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group rule updated 
['10785715-ddea-43bb-82fa-9f44a2fb1faa']#033[00m Dec 2 05:15:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:15:09 localhost systemd[1]: tmp-crun.aho2ke.mount: Deactivated successfully. Dec 2 05:15:09 localhost podman[334019]: 2025-12-02 10:15:09.197896684 +0000 UTC m=+0.099332018 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:15:09 localhost podman[334019]: 2025-12-02 10:15:09.212304498 +0000 UTC m=+0.113739822 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd) Dec 2 05:15:09 localhost systemd[1]: 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:15:09 localhost neutron_sriov_agent[256494]: 2025-12-02 10:15:09.250 2 INFO neutron.agent.securitygroups_rpc [None req-ee552935-4da7-44ca-8e38-6eb6181199e8 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group rule updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']#033[00m Dec 2 05:15:09 localhost nova_compute[281854]: 2025-12-02 10:15:09.422 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:09 localhost nova_compute[281854]: 2025-12-02 10:15:09.422 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:09 localhost nova_compute[281854]: 2025-12-02 10:15:09.423 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:11 localhost nova_compute[281854]: 2025-12-02 10:15:11.317 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:11 localhost nova_compute[281854]: 2025-12-02 10:15:11.321 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 
318767104 Dec 2 05:15:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e253 e253: 6 total, 6 up, 6 in Dec 2 05:15:11 localhost nova_compute[281854]: 2025-12-02 10:15:11.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:11 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:11 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow 
r"], "format": "json"}]': finished Dec 2 05:15:12 localhost nova_compute[281854]: 2025-12-02 10:15:12.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:15:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:15:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:15:13 localhost podman[334039]: 2025-12-02 10:15:13.445592451 +0000 UTC m=+0.078819401 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller) Dec 2 05:15:13 localhost podman[334038]: 2025-12-02 10:15:13.507402379 +0000 UTC m=+0.139623742 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:15:13 localhost podman[334039]: 2025-12-02 10:15:13.51421115 +0000 UTC m=+0.147438130 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 05:15:13 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:15:13 localhost podman[334038]: 2025-12-02 10:15:13.566595366 +0000 UTC m=+0.198816719 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:15:13 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:15:13 localhost neutron_sriov_agent[256494]: 2025-12-02 10:15:13.700 2 INFO neutron.agent.securitygroups_rpc [req-a541e13d-87f6-4580-832f-af5d7aef99a4 req-c15ffc3e-ba6d-409e-8103-3b4ea0d7e66e 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group member updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']#033[00m Dec 2 05:15:13 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:15:13.826 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:15:13Z, description=, device_id=e4135ac9-548a-4e8d-99d6-cde8dedb2c77, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5312b3e8-70f6-4e16-95ba-31b46130d41f, ip_allocation=immediate, mac_address=fa:16:3e:77:0c:21, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:14:38Z, description=, dns_domain=, id=8703a229-8c49-443e-95c6-aff62a358434, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1306125232-network, port_security_enabled=True, project_id=d858413a9b01463f96545916d2abe5ab, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=770, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3448, status=ACTIVE, subnets=['9d626c62-851c-4a11-822f-bd4dadd5e8b1'], tags=[], tenant_id=d858413a9b01463f96545916d2abe5ab, updated_at=2025-12-02T10:14:38Z, vlan_transparent=None, network_id=8703a229-8c49-443e-95c6-aff62a358434, port_security_enabled=True, project_id=d858413a9b01463f96545916d2abe5ab, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['10785715-ddea-43bb-82fa-9f44a2fb1faa'], 
standard_attr_id=3579, status=DOWN, tags=[], tenant_id=d858413a9b01463f96545916d2abe5ab, updated_at=2025-12-02T10:15:13Z on network 8703a229-8c49-443e-95c6-aff62a358434#033[00m Dec 2 05:15:14 localhost dnsmasq[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/addn_hosts - 2 addresses Dec 2 05:15:14 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/host Dec 2 05:15:14 localhost podman[334104]: 2025-12-02 10:15:14.062586574 +0000 UTC m=+0.068743363 container kill 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125) Dec 2 05:15:14 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/opts Dec 2 05:15:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:15:14.324 263406 INFO neutron.agent.dhcp.agent [None req-d128d369-b0b2-4225-a1b6-72f55a995efa - - - - - -] DHCP configuration for ports {'5312b3e8-70f6-4e16-95ba-31b46130d41f'} is completed#033[00m Dec 2 05:15:14 localhost systemd[1]: tmp-crun.Q3Q78u.mount: Deactivated successfully. 
Dec 2 05:15:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:15:14.567 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005541914.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:15:13Z, description=, device_id=e4135ac9-548a-4e8d-99d6-cde8dedb2c77, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-volumesbackupstest-instance-296444076, extra_dhcp_opts=[], fixed_ips=[], id=5312b3e8-70f6-4e16-95ba-31b46130d41f, ip_allocation=immediate, mac_address=fa:16:3e:77:0c:21, name=, network_id=8703a229-8c49-443e-95c6-aff62a358434, port_security_enabled=True, project_id=d858413a9b01463f96545916d2abe5ab, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['10785715-ddea-43bb-82fa-9f44a2fb1faa'], standard_attr_id=3579, status=DOWN, tags=[], tenant_id=d858413a9b01463f96545916d2abe5ab, updated_at=2025-12-02T10:15:14Z on network 8703a229-8c49-443e-95c6-aff62a358434#033[00m Dec 2 05:15:14 localhost dnsmasq[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/addn_hosts - 2 addresses Dec 2 05:15:14 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/host Dec 2 05:15:14 localhost podman[334141]: 2025-12-02 10:15:14.756494145 +0000 UTC m=+0.049591932 container kill 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0) Dec 2 05:15:14 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/opts Dec 2 05:15:14 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:15:14.947 263406 INFO neutron.agent.dhcp.agent [None req-93515900-132c-4ee3-b665-942fd11f9c32 - - - - - -] DHCP configuration for ports {'5312b3e8-70f6-4e16-95ba-31b46130d41f'} is completed#033[00m Dec 2 05:15:15 localhost sshd[334163]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:15:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:15 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:15 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:15 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:16 localhost nova_compute[281854]: 2025-12-02 10:15:16.322 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:15:16 localhost nova_compute[281854]: 2025-12-02 10:15:16.324 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:16 localhost nova_compute[281854]: 2025-12-02 10:15:16.325 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:15:16 localhost nova_compute[281854]: 
2025-12-02 10:15:16.325 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:15:16 localhost nova_compute[281854]: 2025-12-02 10:15:16.325 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:15:16 localhost nova_compute[281854]: 2025-12-02 10:15:16.327 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:15:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:16 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 2 05:15:16 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:16 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:16 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", 
"entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:21 localhost nova_compute[281854]: 2025-12-02 10:15:21.326 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:22 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:22 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:22 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:22 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": 
"client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:24 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 2 05:15:25 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:25 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:25 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:25 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:26 localhost nova_compute[281854]: 2025-12-02 10:15:26.328 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:15:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:27 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch Dec 2 05:15:27 localhost ceph-mon[298296]: from='mgr.34354 ' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:27 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:27 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:15:28 localhost podman[334165]: 2025-12-02 10:15:28.458685744 +0000 UTC m=+0.086848296 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 2 05:15:28 localhost podman[334165]: 2025-12-02 10:15:28.474120804 +0000 UTC m=+0.102283356 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 2 05:15:28 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:15:31 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:31 localhost nova_compute[281854]: 2025-12-02 10:15:31.333 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:15:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:32 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:32 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:32 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:15:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:15:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:15:32 localhost podman[334185]: 2025-12-02 10:15:32.450262756 +0000 UTC m=+0.088265243 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, distribution-scope=public) Dec 2 05:15:32 localhost podman[334185]: 2025-12-02 10:15:32.465982395 +0000 UTC m=+0.103984872 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 
'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:15:32 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:15:32 localhost systemd[1]: tmp-crun.sGSVw0.mount: Deactivated successfully. Dec 2 05:15:32 localhost podman[334186]: 2025-12-02 10:15:32.553906907 +0000 UTC m=+0.185134764 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:15:32 localhost podman[334186]: 2025-12-02 10:15:32.563946995 +0000 UTC m=+0.195174862 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus 
Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:15:32 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:15:32 localhost podman[334184]: 2025-12-02 10:15:32.607866466 +0000 UTC m=+0.244418005 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 2 05:15:32 localhost podman[334184]: 2025-12-02 10:15:32.641126333 +0000 UTC 
m=+0.277677852 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 2 05:15:32 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:15:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e254 e254: 6 total, 6 up, 6 in Dec 2 05:15:33 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch Dec 2 05:15:33 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} : dispatch Dec 2 05:15:33 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} : dispatch Dec 2 05:15:33 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"}]': finished Dec 2 05:15:34 localhost openstack_network_exporter[242845]: ERROR 10:15:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:15:34 localhost openstack_network_exporter[242845]: ERROR 10:15:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:15:34 localhost openstack_network_exporter[242845]: ERROR 10:15:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:15:34 localhost openstack_network_exporter[242845]: ERROR 10:15:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:15:34 localhost openstack_network_exporter[242845]: Dec 2 05:15:34 localhost openstack_network_exporter[242845]: ERROR 10:15:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:15:34 localhost openstack_network_exporter[242845]: Dec 2 05:15:35 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": 
"client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:35 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:35 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:35 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:36 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:15:36 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:15:36 localhost podman[240799]: time="2025-12-02T10:15:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:15:36 localhost podman[240799]: @ - - [02/Dec/2025:10:15:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156107 "" "Go-http-client/1.1" Dec 2 05:15:36 localhost podman[240799]: @ - - [02/Dec/2025:10:15:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19268 "" "Go-http-client/1.1" Dec 2 05:15:36 localhost nova_compute[281854]: 2025-12-02 10:15:36.336 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:37 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 2 05:15:37 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 2 05:15:37 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 2 05:15:37 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Dec 2 05:15:37 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e255 e255: 6 total, 6 up, 6 in Dec 2 05:15:38 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:15:38 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:38 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:38 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:38 localhost ceph-mon[298296]: 
from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:15:38 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1723398924' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:15:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:15:39 localhost podman[334330]: 2025-12-02 10:15:39.456374101 +0000 UTC m=+0.098473025 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Dec 2 05:15:39 localhost podman[334330]: 2025-12-02 10:15:39.468973227 +0000 UTC m=+0.111072131 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 2 05:15:39 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:15:40 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e256 e256: 6 total, 6 up, 6 in Dec 2 05:15:40 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Dec 2 05:15:40 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. 
Dec 2 05:15:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:15:40.338 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:15:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:15:40.339 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:15:40 localhost nova_compute[281854]: 2025-12-02 10:15:40.340 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:41 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:41 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:41 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:41 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:41 localhost 
ceph-mon[298296]: mon.np0005541913@1(peon).osd e257 e257: 6 total, 6 up, 6 in Dec 2 05:15:41 localhost nova_compute[281854]: 2025-12-02 10:15:41.342 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e258 e258: 6 total, 6 up, 6 in Dec 2 05:15:43 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:43 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:43 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:43 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:15:43.340 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:15:44 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e259 e259: 6 total, 6 up, 6 in Dec 2 05:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:15:44 localhost podman[334350]: 2025-12-02 10:15:44.507951751 +0000 UTC m=+0.144563864 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:15:44 localhost podman[334349]: 2025-12-02 10:15:44.487443834 +0000 UTC m=+0.129230525 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:15:44 localhost podman[334349]: 2025-12-02 10:15:44.573042185 +0000 UTC m=+0.214828836 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:15:44 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:15:44 localhost podman[334350]: 2025-12-02 10:15:44.590287235 +0000 UTC m=+0.226899358 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:15:44 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:15:45 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:45 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:45 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:45 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:45 localhost 
ceph-mon[298296]: mon.np0005541913@1(peon).osd e260 e260: 6 total, 6 up, 6 in Dec 2 05:15:46 localhost nova_compute[281854]: 2025-12-02 10:15:46.344 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:15:46 localhost nova_compute[281854]: 2025-12-02 10:15:46.345 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:46 localhost nova_compute[281854]: 2025-12-02 10:15:46.345 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:15:46 localhost nova_compute[281854]: 2025-12-02 10:15:46.345 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:15:46 localhost nova_compute[281854]: 2025-12-02 10:15:46.346 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:15:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:47 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e261 e261: 6 total, 6 up, 6 in Dec 2 05:15:47 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Dec 2 05:15:48 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 2 05:15:48 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3835753396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 2 05:15:48 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:48 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:48 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:48 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:48 localhost nova_compute[281854]: 2025-12-02 10:15:48.930 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:49 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e262 e262: 6 total, 6 up, 6 in Dec 2 05:15:50 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e263 e263: 6 total, 6 up, 6 in Dec 2 05:15:50 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:51 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e264 e264: 6 total, 6 up, 6 in Dec 2 05:15:51 localhost nova_compute[281854]: 2025-12-02 10:15:51.347 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:51 localhost nova_compute[281854]: 2025-12-02 10:15:51.352 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e265 e265: 6 total, 6 up, 6 in Dec 2 05:15:52 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:52 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:52 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:52 localhost nova_compute[281854]: 2025-12-02 10:15:52.588 
281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:52 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e266 e266: 6 total, 6 up, 6 in Dec 2 05:15:54 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:54 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:54 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:15:54 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:15:56 localhost nova_compute[281854]: 2025-12-02 10:15:56.350 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:56 localhost nova_compute[281854]: 2025-12-02 10:15:56.355 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:15:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:15:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e267 e267: 6 total, 6 up, 6 in Dec 2 05:15:56 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 2 05:15:56 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' 
entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 2 05:15:56 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 2 05:15:56 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Dec 2 05:15:57 localhost neutron_sriov_agent[256494]: 2025-12-02 10:15:57.570 2 INFO neutron.agent.securitygroups_rpc [req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 req-bbad3521-a7cd-468f-9368-bc82a5a5c437 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group member updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']#033[00m Dec 2 05:15:57 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e268 e268: 6 total, 6 up, 6 in Dec 2 05:15:57 localhost dnsmasq[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/addn_hosts - 1 addresses Dec 2 05:15:57 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/host Dec 2 05:15:57 localhost podman[334415]: 2025-12-02 10:15:57.915243633 +0000 UTC m=+0.061391377 container kill 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:15:57 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/opts Dec 2 05:15:58 localhost ceph-mon[298296]: 
from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:15:58 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:58 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch Dec 2 05:15:58 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished Dec 2 05:15:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:15:59 localhost podman[334435]: 2025-12-02 10:15:59.453137496 +0000 UTC m=+0.091360626 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 05:15:59 localhost podman[334435]: 2025-12-02 10:15:59.469117801 +0000 UTC m=+0.107340961 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:15:59 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:15:59 localhost ovn_controller[154505]: 2025-12-02T10:15:59Z|00607|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:15:59 localhost nova_compute[281854]: 2025-12-02 10:15:59.963 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:00 localhost nova_compute[281854]: 2025-12-02 10:16:00.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:01 localhost nova_compute[281854]: 2025-12-02 10:16:01.354 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:01 localhost nova_compute[281854]: 2025-12-02 10:16:01.359 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e269 e269: 6 total, 6 up, 6 in Dec 2 05:16:01 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:16:01 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch Dec 2 05:16:01 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": 
"client.tempest-cephx-id-1696860369"} : dispatch Dec 2 05:16:01 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished Dec 2 05:16:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e270 e270: 6 total, 6 up, 6 in Dec 2 05:16:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:03.059 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:16:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:03.060 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:16:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:03.060 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:16:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:16:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:16:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:16:03 localhost podman[334455]: 2025-12-02 10:16:03.452574758 +0000 UTC m=+0.091055817 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:16:03 localhost podman[334457]: 2025-12-02 10:16:03.51345838 +0000 UTC 
m=+0.145038386 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:16:03 localhost podman[334457]: 2025-12-02 10:16:03.523194079 +0000 UTC m=+0.154774095 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:16:03 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:16:03 localhost podman[334455]: 2025-12-02 10:16:03.53857538 +0000 UTC m=+0.177056499 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 2 05:16:03 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: 
Deactivated successfully. Dec 2 05:16:03 localhost podman[334456]: 2025-12-02 10:16:03.616605559 +0000 UTC m=+0.249269933 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., 
container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, name=ubi9-minimal) Dec 2 05:16:03 localhost podman[334456]: 2025-12-02 10:16:03.634171067 +0000 UTC m=+0.266835491 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter) Dec 2 05:16:03 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:16:03 localhost nova_compute[281854]: 2025-12-02 10:16:03.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:03 localhost nova_compute[281854]: 2025-12-02 10:16:03.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:16:03 localhost nova_compute[281854]: 2025-12-02 10:16:03.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:16:03 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e271 e271: 6 total, 6 up, 6 in Dec 2 05:16:04 localhost openstack_network_exporter[242845]: ERROR 10:16:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:16:04 localhost openstack_network_exporter[242845]: ERROR 10:16:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:16:04 localhost openstack_network_exporter[242845]: ERROR 10:16:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:16:04 localhost openstack_network_exporter[242845]: ERROR 10:16:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:16:04 localhost openstack_network_exporter[242845]: Dec 2 05:16:04 localhost openstack_network_exporter[242845]: ERROR 10:16:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:16:04 localhost openstack_network_exporter[242845]: Dec 2 
05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.099 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.099 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.100 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.100 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.700 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.743 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.743 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.744 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.745 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, 
skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.901 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.902 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.902 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 10:16:04.902 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:16:04 localhost nova_compute[281854]: 2025-12-02 
10:16:04.903 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:16:05 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:16:05 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3357420336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:16:05 localhost dnsmasq[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/addn_hosts - 0 addresses Dec 2 05:16:05 localhost podman[334553]: 2025-12-02 10:16:05.400369215 +0000 UTC m=+0.040155772 container kill 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:16:05 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/host Dec 2 05:16:05 localhost dnsmasq-dhcp[333770]: read /var/lib/neutron/dhcp/8703a229-8c49-443e-95c6-aff62a358434/opts Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.405 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m 
Dec 2 05:16:05 localhost ovn_controller[154505]: 2025-12-02T10:16:05Z|00608|binding|INFO|Releasing lport f119cdef-0974-4d2c-8acd-8d7464640ca9 from this chassis (sb_readonly=0) Dec 2 05:16:05 localhost kernel: device tapf119cdef-09 left promiscuous mode Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.612 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:05 localhost ovn_controller[154505]: 2025-12-02T10:16:05Z|00609|binding|INFO|Setting lport f119cdef-0974-4d2c-8acd-8d7464640ca9 down in Southbound Dec 2 05:16:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:05.636 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-8703a229-8c49-443e-95c6-aff62a358434', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8703a229-8c49-443e-95c6-aff62a358434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd858413a9b01463f96545916d2abe5ab', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22d83034-71a8-46e9-a33a-f696e74c13f0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f119cdef-0974-4d2c-8acd-8d7464640ca9) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.638 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.639 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:05.640 160221 INFO neutron.agent.ovn.metadata.agent [-] Port f119cdef-0974-4d2c-8acd-8d7464640ca9 in datapath 8703a229-8c49-443e-95c6-aff62a358434 unbound from our chassis#033[00m Dec 2 05:16:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:05.642 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8703a229-8c49-443e-95c6-aff62a358434, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:16:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:05.645 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[57e8f51b-7bc4-4eb8-beea-2b23991f8753]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.646 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.647 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path 
_get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.831 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.832 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11051MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": 
"1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.833 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.833 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.934 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.935 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.936 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:16:05 localhost nova_compute[281854]: 2025-12-02 10:16:05.989 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:16:06 localhost podman[240799]: time="2025-12-02T10:16:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:16:06 localhost podman[240799]: @ - - [02/Dec/2025:10:16:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156107 "" "Go-http-client/1.1" Dec 2 05:16:06 localhost podman[240799]: @ - - [02/Dec/2025:10:16:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19268 "" "Go-http-client/1.1" Dec 2 05:16:06 localhost nova_compute[281854]: 2025-12-02 10:16:06.356 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:06 localhost nova_compute[281854]: 2025-12-02 10:16:06.368 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:16:06 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2602515044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:16:06 localhost nova_compute[281854]: 2025-12-02 10:16:06.464 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:16:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:06 localhost nova_compute[281854]: 2025-12-02 10:16:06.472 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:16:06 localhost nova_compute[281854]: 2025-12-02 10:16:06.533 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 
'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:16:06 localhost nova_compute[281854]: 2025-12-02 10:16:06.535 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:16:06 localhost nova_compute[281854]: 2025-12-02 10:16:06.535 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:16:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e272 e272: 6 total, 6 up, 6 in Dec 2 05:16:07 localhost ovn_controller[154505]: 2025-12-02T10:16:07Z|00610|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:16:07 localhost nova_compute[281854]: 2025-12-02 10:16:07.208 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:07 localhost podman[334618]: 2025-12-02 10:16:07.798153422 +0000 UTC m=+0.057346328 container kill 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 2 05:16:07 localhost dnsmasq[333770]: exiting on receipt of SIGTERM Dec 2 05:16:07 localhost systemd[1]: libpod-069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5.scope: Deactivated successfully. Dec 2 05:16:07 localhost podman[334632]: 2025-12-02 10:16:07.872864934 +0000 UTC m=+0.055456139 container died 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:16:07 localhost podman[334632]: 2025-12-02 10:16:07.911342819 +0000 UTC m=+0.093933984 container cleanup 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 2 05:16:07 localhost systemd[1]: libpod-conmon-069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5.scope: Deactivated successfully. 
Dec 2 05:16:07 localhost podman[334633]: 2025-12-02 10:16:07.99766578 +0000 UTC m=+0.177203604 container remove 069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:16:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:16:08.125 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:16:08 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:16:08.373 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:16:08 localhost systemd[1]: var-lib-containers-storage-overlay-6a0ee3792bce35a0d1020085525a61984484526151f073e1f67a4a8079d9209d-merged.mount: Deactivated successfully. Dec 2 05:16:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-069cefe05362f43be01d929639e0342a01388cc1f055e1f2cef05f17f8de56a5-userdata-shm.mount: Deactivated successfully. Dec 2 05:16:08 localhost systemd[1]: run-netns-qdhcp\x2d8703a229\x2d8c49\x2d443e\x2d95c6\x2daff62a358434.mount: Deactivated successfully. 
Dec 2 05:16:09 localhost nova_compute[281854]: 2025-12-02 10:16:09.537 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:16:10 localhost systemd[1]: tmp-crun.Bizifj.mount: Deactivated successfully. Dec 2 05:16:10 localhost podman[334661]: 2025-12-02 10:16:10.442194833 +0000 UTC m=+0.081344528 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 2 05:16:10 localhost podman[334661]: 2025-12-02 10:16:10.480282739 +0000 UTC m=+0.119432494 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3) Dec 2 05:16:10 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:16:10 localhost nova_compute[281854]: 2025-12-02 10:16:10.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:10 localhost nova_compute[281854]: 2025-12-02 10:16:10.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:11 localhost nova_compute[281854]: 2025-12-02 10:16:11.360 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:11 localhost nova_compute[281854]: 2025-12-02 10:16:11.365 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e273 e273: 6 total, 6 up, 6 in Dec 2 05:16:12 localhost nova_compute[281854]: 2025-12-02 10:16:12.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:13 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e274 e274: 6 total, 6 up, 6 in Dec 2 05:16:13 localhost nova_compute[281854]: 2025-12-02 10:16:13.920 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:14 localhost nova_compute[281854]: 2025-12-02 10:16:14.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:16:15 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e275 e275: 6 total, 6 up, 6 in Dec 2 05:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:16:15 localhost podman[334680]: 2025-12-02 10:16:15.450981255 +0000 UTC m=+0.093185444 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:16:15 localhost podman[334680]: 2025-12-02 10:16:15.459344798 +0000 UTC m=+0.101548997 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:16:15 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:16:15 localhost podman[334681]: 2025-12-02 10:16:15.546301875 +0000 UTC m=+0.185017182 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller) Dec 2 05:16:15 localhost podman[334681]: 2025-12-02 10:16:15.610027873 +0000 UTC m=+0.248743150 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 05:16:15 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.109 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.110 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.122 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.123 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '384c6f7b-8d6b-4d27-a0b9-5763798a973d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.110220', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efdbe84a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.329293654, 'message_signature': 'da23f97963522e22b76e0fc36e4c7024e4b8b92f9a25905b80ab2364fa57f085'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.110220', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efdbfcf4-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.329293654, 'message_signature': '8ca65ce0ea7f80085edf03d1c1a7c026d99bbf930b8e62f6fd92be56fdf48663'}]}, 'timestamp': '2025-12-02 10:16:16.124050', '_unique_id': '811abed902d14843b1a2225e2ddde55c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.125 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.127 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.131 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'aafa79b0-20ec-49b8-ac11-c48861938f26', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.127512', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efdd3556-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': '79b499edfd2810cab9381e0b08e3148894980a81023c70f29f4052d9b79f01d8'}]}, 'timestamp': '2025-12-02 10:16:16.132072', '_unique_id': 'c78aa8d947a04894bb9fa64c4858a2b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.133 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.134 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.134 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.134 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.163 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.164 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '45c4b23d-fac3-4cdf-8e05-f57dcf1379da', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.134657', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efe215f8-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': '38f76dbe30c0b80738e4be4a3d6cc7a410479922a2113f769c59df432cb7149b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.134657', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efe2276e-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': '87cc43201fa40cf57585b4cd198bc85a9f86d0fd5f0979311b59046d1e240535'}]}, 'timestamp': '2025-12-02 10:16:16.164507', '_unique_id': '95668f3ec59a4ad7af61f8dc7863e4c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.165 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.166 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.183 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9f8b4f2a-c978-4bdd-a8d5-2c57318f55bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:16:16.166861', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'efe52d74-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.402767421, 'message_signature': '5d1a2ff275b30f4a18ad6956150165a25c90224870a96e0f84a5aab0e4476087'}]}, 'timestamp': '2025-12-02 10:16:16.184270', '_unique_id': '4863cdb3e67b4d6288402be555ca31cc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.185 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.186 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.186 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ce7e66f9-043a-499c-9dcb-2364ff0d3f96', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.186462', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efe596b0-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': '1847052e504a67c27440598c222fef98c734ce084cd427e7d23ab58da4b42f19'}]}, 'timestamp': '2025-12-02 10:16:16.186985', '_unique_id': '7bc979690889437aa88caaeed0863f8a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.188 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.189 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.190 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '51d15e19-739d-4c28-a6f8-05d36ba3209d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.190010', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efe61f36-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': '6969806eb63c6cd95e9cc01956ceb79e92c6f02aa458a3b0157fdee546ffc247'}]}, 'timestamp': '2025-12-02 10:16:16.190469', '_unique_id': '1ee88e6823104cc1823451612825f3e0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.191 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.192 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.192 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 20280000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '3dc50bf0-7364-45a9-8126-7e059a8dac09', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 20280000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:16:16.192589', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'efe68502-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.402767421, 'message_signature': '0d2eefe7d5bcb8ef6efface0b99b8509c5f8ca80d1abce7497cbe0266f403004'}]}, 'timestamp': '2025-12-02 10:16:16.193061', '_unique_id': '88f2c79615004ca4ac224cb0102f65d6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.193 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.195 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:16:16.195 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fd5e8859-9003-4fc6-81df-3abb51f11464', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.195146', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efe6e768-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': 
'b3a9cbc57f64a0bf6da429c5f8e24338ab87f0d0848c08d9561312b23368428c'}]}, 'timestamp': '2025-12-02 10:16:16.195591', '_unique_id': 'ba4a41dce4ea464ebe7fa417dbae916c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR 
oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.196 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.197 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.197 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '36f9a57d-7aeb-4891-9a70-512bfefec90c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.197894', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efe75324-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': 'b3b1ed57b873574ee573fc15f3d434cefa5f68aa86ed904cc86dc5861a432b8c'}]}, 'timestamp': '2025-12-02 10:16:16.198353', '_unique_id': '21a4dcf685c5429a86712d029231781c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.199 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.200 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '45e17246-7dbf-4e6d-987f-f54fdc532fa7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.200450', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efe7b7d8-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': 'eb34a09fb8a3bbe12aeb65f50ea2a8034f41fbc9075a63a3d044a0a1edabe9cb'}]}, 'timestamp': '2025-12-02 10:16:16.200930', '_unique_id': '571b02040beb46b8af26bc2c9ff5f785'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:16:16.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.203 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '685b3a15-7521-43c7-bd16-709d2ee48664', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.203419', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efe82e0c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': '4cafdf1b9c617bd313368f9faca267ab30185c7f2fd15081e7115dd03957d7c8'}]}, 'timestamp': '2025-12-02 10:16:16.203962', '_unique_id': 'a76258976aa648e7bd43f8d375b41443'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.204 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd2d00319-44e4-4688-952f-bb25449c9f6e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.206143', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efe894e6-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': '5eefbdf6f393195d64391f03b801d79dda1f5201138ca89e85b0d7aeb30377bc'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.206143', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efe8a634-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': '19f1dd4811754367317b1b1ac451769e2bd462af8702ff9540e28d2f4bb4d29c'}]}, 'timestamp': '2025-12-02 10:16:16.207004', '_unique_id': 'd99195e5a4fe49bca882cc5864728855'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.209 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cb2dcc13-b7ce-47f1-a8f8-88165d7596cf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.209162', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efe90b42-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': '4b274c2fa3c77937cbbcb8cd525fbca3649941b597c47b07b82d93b3fac71398'}]}, 'timestamp': '2025-12-02 10:16:16.209768', '_unique_id': 'e9476f4e79f2484c937c5bcb63c4d476'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.211 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.211 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.212 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ec5736f8-1045-4c45-9208-b633685476e0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.211931', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efe97730-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': '74f442e2d211bed5233d85972d2aced14c95535970b6d99a9665dd16c9354864'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.211931', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efe98a4a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': '221d9d623041a7427a4e6587e55e9c1b28b851a61d3a6a098d097f6767281d7f'}]}, 'timestamp': '2025-12-02 10:16:16.212850', '_unique_id': 'd8c3daa17340452bac210156ec255414'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.214 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.215 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.215 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '55bc436e-da75-4641-86ca-7f574dea5888', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.214988', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efe9ee9a-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': 'f0d15cc03f50e3f35eefe9fbfbb82f84f736fecc3efa8d20aef47e695c93f60b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.214988', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efe9ffc0-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': '27eee6abdb1a90903d04ab24d26902fbbf244faa4f814ff6054974decfff9777'}]}, 'timestamp': '2025-12-02 10:16:16.215853', '_unique_id': '6c1f224ed1554704b481dbdde6db971a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:16:16.216 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.216 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.217 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.218 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'acc41f9a-34d7-4ae8-a912-cf0045b3e391', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.218108', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efea6898-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': '748ae6c16a3ec738c907d2b3dfa72a2d685326c294cc245b6a7ef9781a5b7c1e'}]}, 'timestamp': '2025-12-02 10:16:16.218590', '_unique_id': '494f4265192a45a0b888701bc9e20fc3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.219 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.221 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ff6c4df2-1847-4e45-a836-b2df467b4aee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:16:16.221036', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': 'efeadb02-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.346596115, 'message_signature': 'e46dfcb4b9d755d0b02ad9e71e048bafcb544b7077262443c7965dcc10f2c5b6'}]}, 'timestamp': '2025-12-02 10:16:16.221492', '_unique_id': '36f2a5cabf704cb5b8b3afac3a7901d8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:16:16.222 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.223 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.224 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '801d4154-d05b-4c59-87fa-1321b5cbda0c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.223753', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efeb454c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.329293654, 'message_signature': 'ec8dd40f05aef34a17e002aa55fe10610bbe9c37fddec32a0d46c4c6ed558018'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.223753', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efeb56c2-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.329293654, 'message_signature': '4e2e0682f4f650743bad3303b3d34c45f6fd4744577d84bf709d4cddbb1616e1'}]}, 'timestamp': '2025-12-02 10:16:16.224602', '_unique_id': 'f8bee13cb9dd4865b6f12e23f268802c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.226 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.226 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.226 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '0e150967-5638-4182-81c7-482221daf55b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.226159', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efeba3c0-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.329293654, 'message_signature': 'f59ddec9aa546b8fbc845d9eee34699841a7238e7b92331743fe76ed31da73e8'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.226159', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efebaf82-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.329293654, 'message_signature': 'b445f1e065b196feba06f7a228a2fcce8d66281d1740d5ac0e77a60f15514596'}]}, 'timestamp': '2025-12-02 10:16:16.226830', '_unique_id': 'ea119117ca404d388a22f1f281d5ec71'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:16:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]:
2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging
return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.227 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.228 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.228 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.228 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.228 12 DEBUG ceilometer.compute.pollsters [-]
b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e29952dc-92a4-4fb4-899e-002938535753', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.228561', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efebff8c-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': 'fa3a0dc8e08996646b0eea3db39bb212c8a5a5de7ae64933f1ba3fbdbbe1ecf2'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124,
'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.228561', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efec0a0e-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': 'e594fc089842fec66f905830a8b08fd8623c218fef707d9018b91bad91aab92c'}]}, 'timestamp': '2025-12-02 10:16:16.229146', '_unique_id': '8a70c2e6bbfb4571a51d1a8c6b3a2cc6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:16:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging
self.transport._send_notification(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging
self._ensure_connection(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.229 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.230 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.230 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:16:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.230 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1c571a32-4bc9-49ae-ad8d-cd79205d7c4c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:16:16.230473', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'efec492e-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': 'd359e676d9291ceac028f00c1a423884785aec31883c80ceba6ab9d7dcfb4d1b'}, {'source': 'openstack', 'counter_name':
'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:16:16.230473', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'efec532e-cf67-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12738.353735384, 'message_signature': '862e08512396c44825869055de50d3e8c99adc63338db03ebe64d8a3cdd8c1b4'}]}, 'timestamp': '2025-12-02 10:16:16.231017', '_unique_id': '3481efa4a43f41798712622702176463'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:16:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:16:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line
423, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:16:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:16:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:16:16.231 12 ERROR oslo_messaging.notify.messaging Dec 2 05:16:16 localhost nova_compute[281854]: 2025-12-02 10:16:16.362 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:16 localhost nova_compute[281854]: 2025-12-02 10:16:16.368 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:16 
localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:21 localhost nova_compute[281854]: 2025-12-02 10:16:21.364 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:21 localhost nova_compute[281854]: 2025-12-02 10:16:21.370 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e276 e276: 6 total, 6 up, 6 in Dec 2 05:16:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:16:25 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4681 writes, 36K keys, 4681 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s#012Cumulative WAL: 4681 writes, 4681 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2537 writes, 13K keys, 2537 commit groups, 1.0 writes per commit group, ingest: 19.08 MB, 0.03 MB/s#012Interval WAL: 2537 writes, 2537 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 117.0 0.35 0.11 16 0.022 0 0 0.0 0.0#012 L6 1/0 18.03 MB 0.0 0.3 0.0 0.2 0.2 0.0 0.0 6.2 181.7 166.7 1.50 0.65 15 0.100 193K 7757 0.0 0.0#012 Sum 1/0 18.03 MB 0.0 0.3 0.0 0.2 0.3 0.1 0.0 7.2 147.6 157.4 1.85 0.76 31 0.060 193K 7757 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 11.8 163.0 165.8 0.77 0.35 14 0.055 95K 3783 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.2 0.2 0.0 0.0 0.0 181.7 166.7 1.50 0.65 15 0.100 193K 7757 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 124.0 0.33 0.11 15 0.022 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.02 0.00 1 0.019 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.28 GB write, 0.24 MB/s write, 0.27 GB read, 0.23 MB/s read, 1.8 seconds#012Interval compaction: 0.13 GB write, 0.21 MB/s write, 0.12 GB read, 0.21 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 
level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563183c47350#2 capacity: 304.00 MB usage: 38.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000264 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2537,36.92 MB,12.1445%) FilterBlock(31,565.42 KB,0.181635%) IndexBlock(31,738.52 KB,0.237239%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 2 05:16:26 localhost nova_compute[281854]: 2025-12-02 10:16:26.366 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:26 localhost nova_compute[281854]: 2025-12-02 10:16:26.372 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:16:30 localhost podman[334728]: 2025-12-02 10:16:30.438806675 +0000 UTC m=+0.082365696 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=edpm) Dec 2 05:16:30 localhost podman[334728]: 2025-12-02 10:16:30.449742656 +0000 UTC m=+0.093301667 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 2 05:16:30 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:16:31 localhost nova_compute[281854]: 2025-12-02 10:16:31.368 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:31 localhost nova_compute[281854]: 2025-12-02 10:16:31.375 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:32 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e277 e277: 6 total, 6 up, 6 in Dec 2 05:16:34 localhost openstack_network_exporter[242845]: ERROR 10:16:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:16:34 localhost openstack_network_exporter[242845]: ERROR 10:16:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:16:34 localhost openstack_network_exporter[242845]: ERROR 10:16:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:16:34 localhost openstack_network_exporter[242845]: ERROR 10:16:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:16:34 localhost openstack_network_exporter[242845]: Dec 2 05:16:34 localhost openstack_network_exporter[242845]: ERROR 10:16:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:16:34 localhost openstack_network_exporter[242845]: Dec 2 05:16:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:16:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. 
Dec 2 05:16:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:16:34 localhost podman[334746]: 2025-12-02 10:16:34.440363723 +0000 UTC m=+0.078000489 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 2 05:16:34 localhost podman[334747]: 2025-12-02 10:16:34.507336688 +0000 UTC m=+0.139438277 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm) Dec 2 05:16:34 localhost podman[334747]: 2025-12-02 10:16:34.546193904 +0000 UTC m=+0.178295503 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public) Dec 2 05:16:34 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:16:34 localhost podman[334748]: 2025-12-02 10:16:34.560414513 +0000 UTC m=+0.189442640 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:16:34 localhost podman[334748]: 2025-12-02 10:16:34.572160826 +0000 UTC m=+0.201188963 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:16:34 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:16:34 localhost podman[334746]: 2025-12-02 10:16:34.628923938 +0000 UTC m=+0.266560704 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 05:16:34 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: 
Deactivated successfully. Dec 2 05:16:35 localhost systemd[1]: tmp-crun.dcQR2B.mount: Deactivated successfully. Dec 2 05:16:36 localhost podman[240799]: time="2025-12-02T10:16:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:16:36 localhost podman[240799]: @ - - [02/Dec/2025:10:16:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:16:36 localhost podman[240799]: @ - - [02/Dec/2025:10:16:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18776 "" "Go-http-client/1.1" Dec 2 05:16:36 localhost nova_compute[281854]: 2025-12-02 10:16:36.371 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:36 localhost nova_compute[281854]: 2025-12-02 10:16:36.376 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:36 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:16:36 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:16:37 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:16:38 localhost ovn_controller[154505]: 2025-12-02T10:16:38Z|00611|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory Dec 2 05:16:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:40.683 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), 
table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:16:40 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:40.684 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:16:40 localhost nova_compute[281854]: 2025-12-02 10:16:40.724 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:16:41 localhost nova_compute[281854]: 2025-12-02 10:16:41.374 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:41 localhost nova_compute[281854]: 2025-12-02 10:16:41.379 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:41 localhost podman[334888]: 2025-12-02 10:16:41.454223216 +0000 UTC m=+0.092207728 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:16:41 localhost podman[334888]: 2025-12-02 10:16:41.466791011 +0000 UTC m=+0.104775503 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:16:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:41 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:16:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e278 e278: 6 total, 6 up, 6 in Dec 2 05:16:42 localhost ovn_metadata_agent[160216]: 2025-12-02 10:16:42.685 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:16:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:16:46 localhost nova_compute[281854]: 2025-12-02 10:16:46.378 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:46 localhost nova_compute[281854]: 2025-12-02 10:16:46.381 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:46 localhost podman[334908]: 2025-12-02 10:16:46.456075028 +0000 UTC m=+0.090541047 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 2 05:16:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e278 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:46 localhost podman[334908]: 2025-12-02 10:16:46.495009201 +0000 UTC m=+0.129475180 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:16:46 localhost systemd[1]: tmp-crun.oIbzSX.mount: Deactivated successfully. Dec 2 05:16:46 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:16:46 localhost podman[334907]: 2025-12-02 10:16:46.516775475 +0000 UTC m=+0.152294763 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:16:46 localhost podman[334907]: 2025-12-02 10:16:46.551255969 +0000 UTC m=+0.186775167 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:16:46 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. Dec 2 05:16:50 localhost sshd[334954]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:16:51 localhost nova_compute[281854]: 2025-12-02 10:16:51.380 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:51 localhost nova_compute[281854]: 2025-12-02 10:16:51.384 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0. 
Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.864079) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58 Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611864147, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2830, "num_deletes": 263, "total_data_size": 4018992, "memory_usage": 4084312, "flush_reason": "Manual Compaction"} Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611878676, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 2166306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34313, "largest_seqno": 37138, "table_properties": {"data_size": 2156116, "index_size": 6055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27564, "raw_average_key_size": 22, "raw_value_size": 2133473, "raw_average_value_size": 1769, "num_data_blocks": 260, "num_entries": 1206, "num_filter_entries": 1206, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670490, "oldest_key_time": 1764670490, "file_creation_time": 1764670611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}} Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 14638 microseconds, and 6277 cpu microseconds. Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.878724) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 2166306 bytes OK Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.878749) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.880838) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.880859) EVENT_LOG_v1 {"time_micros": 1764670611880853, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.880877) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 4005507, prev total WAL file size 
4006256, number of live WAL files 2. Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.881992) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323537' seq:72057594037927935, type:22 .. '6D6772737461740034353038' seq:0, type:0; will stop at (end) Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(2115KB)], [57(18MB)] Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611882034, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 21070163, "oldest_snapshot_seqno": -1} Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 14714 keys, 19405484 bytes, temperature: kUnknown Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611998895, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 19405484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19319569, "index_size": 48054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36805, "raw_key_size": 391841, "raw_average_key_size": 26, "raw_value_size": 19067858, 
"raw_average_value_size": 1295, "num_data_blocks": 1812, "num_entries": 14714, "num_filter_entries": 14714, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}} Dec 2 05:16:51 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.999270) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 19405484 bytes Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.001805) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.1 rd, 165.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 18.0 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(18.7) write-amplify(9.0) OK, records in: 15204, records dropped: 490 output_compression: NoCompression Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.001833) EVENT_LOG_v1 {"time_micros": 1764670612001821, "job": 34, "event": "compaction_finished", "compaction_time_micros": 116983, "compaction_time_cpu_micros": 53611, "output_level": 6, "num_output_files": 1, "total_output_size": 19405484, "num_input_records": 15204, "num_output_records": 14714, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612002262, "job": 34, "event": "table_file_deletion", "file_number": 59} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612004849, "job": 
34, "event": "table_file_deletion", "file_number": 57} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:51.881886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.004950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.004956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.004959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.004962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.004965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0. 
Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.005370) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612005459, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 267, "num_deletes": 251, "total_data_size": 23050, "memory_usage": 28352, "flush_reason": "Manual Compaction"} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612008136, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 14027, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37143, "largest_seqno": 37405, "table_properties": {"data_size": 12221, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5023, "raw_average_key_size": 19, "raw_value_size": 8710, "raw_average_value_size": 33, "num_data_blocks": 2, "num_entries": 262, "num_filter_entries": 262, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670611, "oldest_key_time": 1764670611, "file_creation_time": 1764670612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 2803 microseconds, and 1106 cpu microseconds. Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.008180) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 14027 bytes OK Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.008200) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.010696) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.010717) EVENT_LOG_v1 {"time_micros": 1764670612010710, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.010738) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 20978, prev total WAL file size 29535, number of live WAL 
files 2. Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.011150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end) Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(13KB)], [60(18MB)] Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612011194, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19419511, "oldest_snapshot_seqno": -1} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 14465 keys, 17859584 bytes, temperature: kUnknown Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612125928, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 17859584, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17777368, "index_size": 44931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 387110, "raw_average_key_size": 26, "raw_value_size": 17532042, "raw_average_value_size": 1212, 
"num_data_blocks": 1676, "num_entries": 14465, "num_filter_entries": 14465, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.126222) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 17859584 bytes Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.128700) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.1 rd, 155.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 18.5 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(2657.7) write-amplify(1273.2) OK, records in: 14976, records dropped: 511 output_compression: NoCompression Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.128730) EVENT_LOG_v1 {"time_micros": 1764670612128717, "job": 36, "event": "compaction_finished", "compaction_time_micros": 114822, "compaction_time_cpu_micros": 48423, "output_level": 6, "num_output_files": 1, "total_output_size": 17859584, "num_input_records": 14976, "num_output_records": 14465, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612128891, "job": 36, "event": "table_file_deletion", "file_number": 62} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612131969, 
"job": 36, "event": "table_file_deletion", "file_number": 60} Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.011086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.132046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.132052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.132053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.132054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:52 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:16:52.132055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:16:56 localhost nova_compute[281854]: 2025-12-02 10:16:56.384 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:56 localhost nova_compute[281854]: 2025-12-02 10:16:56.386 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:16:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:17:01 localhost nova_compute[281854]: 2025-12-02 10:17:01.386 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:01 localhost nova_compute[281854]: 2025-12-02 10:17:01.392 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:01 localhost podman[334956]: 2025-12-02 10:17:01.449375136 +0000 UTC m=+0.092217542 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:17:01 localhost podman[334956]: 2025-12-02 10:17:01.465078827 +0000 UTC m=+0.107921223 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:17:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:01 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:17:02 localhost nova_compute[281854]: 2025-12-02 10:17:02.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:02 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e279 e279: 6 total, 6 up, 6 in Dec 2 05:17:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:17:03.060 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:17:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:17:03.061 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:17:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:17:03.062 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:17:03 localhost nova_compute[281854]: 2025-12-02 10:17:03.827 281858 
DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:03 localhost nova_compute[281854]: 2025-12-02 10:17:03.827 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:17:04 localhost openstack_network_exporter[242845]: ERROR 10:17:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:17:04 localhost openstack_network_exporter[242845]: ERROR 10:17:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:17:04 localhost openstack_network_exporter[242845]: ERROR 10:17:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:17:04 localhost openstack_network_exporter[242845]: ERROR 10:17:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:17:04 localhost openstack_network_exporter[242845]: Dec 2 05:17:04 localhost openstack_network_exporter[242845]: ERROR 10:17:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:17:04 localhost openstack_network_exporter[242845]: Dec 2 05:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:17:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:17:05 localhost systemd[1]: tmp-crun.3Ti2hZ.mount: Deactivated successfully. Dec 2 05:17:05 localhost podman[334976]: 2025-12-02 10:17:05.717361147 +0000 UTC m=+0.352302942 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41) Dec 2 05:17:05 localhost podman[334977]: 2025-12-02 10:17:05.752587611 +0000 UTC m=+0.383380265 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:17:05 localhost podman[334977]: 2025-12-02 10:17:05.784359473 +0000 UTC m=+0.415152107 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:17:05 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated 
successfully. Dec 2 05:17:05 localhost nova_compute[281854]: 2025-12-02 10:17:05.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:05 localhost nova_compute[281854]: 2025-12-02 10:17:05.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:17:05 localhost nova_compute[281854]: 2025-12-02 10:17:05.828 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:17:05 localhost podman[334976]: 2025-12-02 10:17:05.85368415 +0000 UTC m=+0.488625985 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:17:05 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. 
Dec 2 05:17:05 localhost podman[334975]: 2025-12-02 10:17:05.862790275 +0000 UTC m=+0.501805609 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:17:05 localhost nova_compute[281854]: 2025-12-02 10:17:05.930 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:17:05 localhost nova_compute[281854]: 2025-12-02 10:17:05.930 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:17:05 localhost nova_compute[281854]: 2025-12-02 10:17:05.930 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:17:05 localhost nova_compute[281854]: 2025-12-02 10:17:05.931 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:17:05 localhost podman[334975]: 2025-12-02 10:17:05.942470949 +0000 UTC m=+0.581486264 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 2 05:17:05 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:17:06 localhost podman[240799]: time="2025-12-02T10:17:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:17:06 localhost podman[240799]: @ - - [02/Dec/2025:10:17:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:17:06 localhost podman[240799]: @ - - [02/Dec/2025:10:17:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18784 "" "Go-http-client/1.1" Dec 2 05:17:06 localhost nova_compute[281854]: 2025-12-02 10:17:06.397 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:06 localhost systemd[1]: tmp-crun.5jA9vy.mount: Deactivated successfully. 
Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.083 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.114 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.115 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] 
Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.116 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.205 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.206 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.206 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.207 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:17:07 localhost 
nova_compute[281854]: 2025-12-02 10:17:07.207 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:17:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:17:07 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3859412889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.667 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.770 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:17:07 localhost nova_compute[281854]: 2025-12-02 10:17:07.771 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.006 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.007 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11051MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.007 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.008 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.175 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.176 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.176 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.210 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:17:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:17:08 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/477887898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.671 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.677 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.744 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.747 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:17:08 localhost nova_compute[281854]: 2025-12-02 10:17:08.748 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:17:09 localhost nova_compute[281854]: 2025-12-02 10:17:09.460 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:17:10 localhost ceph-osd[31622]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 20K writes, 78K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 20K writes, 7167 syncs, 2.83 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 33.24 MB, 0.06 MB/s#012Interval WAL: 12K writes, 5261 syncs, 2.33 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 05:17:10 localhost nova_compute[281854]: 2025-12-02 10:17:10.828 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:11 localhost nova_compute[281854]: 2025-12-02 10:17:11.396 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:11 localhost nova_compute[281854]: 2025-12-02 
10:17:11.401 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e280 e280: 6 total, 6 up, 6 in Dec 2 05:17:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:17:12 localhost podman[335077]: 2025-12-02 10:17:12.434110881 +0000 UTC m=+0.074614720 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:17:12 localhost podman[335077]: 2025-12-02 10:17:12.446099913 +0000 UTC m=+0.086603792 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:17:12 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:17:12 localhost nova_compute[281854]: 2025-12-02 10:17:12.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:14 localhost nova_compute[281854]: 2025-12-02 10:17:14.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 2 05:17:15 localhost ceph-osd[32582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.2 total, 600.0 interval#012Cumulative writes: 24K writes, 93K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 24K writes, 8890 syncs, 2.79 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 14K writes, 53K keys, 14K commit groups, 1.0 writes per commit group, ingest: 39.13 MB, 0.07 MB/s#012Interval WAL: 14K writes, 6185 syncs, 2.36 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 2 05:17:15 localhost nova_compute[281854]: 2025-12-02 10:17:15.827 281858 DEBUG oslo_service.periodic_task [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:17:16 localhost nova_compute[281854]: 2025-12-02 10:17:16.399 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:16 localhost nova_compute[281854]: 2025-12-02 10:17:16.401 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:17:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:17:17 localhost systemd[1]: tmp-crun.jZaVWy.mount: Deactivated successfully. 
Dec 2 05:17:17 localhost podman[335098]: 2025-12-02 10:17:17.462164581 +0000 UTC m=+0.099290141 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:17:17 localhost podman[335097]: 2025-12-02 10:17:17.54832231 +0000 UTC m=+0.189459028 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:17:17 localhost podman[335098]: 2025-12-02 10:17:17.579953677 +0000 UTC m=+0.217079247 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller) Dec 2 05:17:17 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. Dec 2 05:17:17 localhost podman[335097]: 2025-12-02 10:17:17.635905797 +0000 UTC m=+0.277042515 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:17:17 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:17:21 localhost nova_compute[281854]: 2025-12-02 10:17:21.402 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:17:21 localhost nova_compute[281854]: 2025-12-02 10:17:21.404 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:21 localhost nova_compute[281854]: 2025-12-02 10:17:21.404 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:17:21 localhost nova_compute[281854]: 2025-12-02 10:17:21.404 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:17:21 localhost nova_compute[281854]: 2025-12-02 10:17:21.405 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:17:21 localhost nova_compute[281854]: 2025-12-02 10:17:21.407 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:23 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e281 e281: 6 total, 6 up, 6 in Dec 2 05:17:26 localhost nova_compute[281854]: 2025-12-02 10:17:26.405 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 
318767104 Dec 2 05:17:27 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e282 e282: 6 total, 6 up, 6 in Dec 2 05:17:31 localhost nova_compute[281854]: 2025-12-02 10:17:31.409 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:17:31 localhost nova_compute[281854]: 2025-12-02 10:17:31.411 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:17:31 localhost nova_compute[281854]: 2025-12-02 10:17:31.412 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:17:31 localhost nova_compute[281854]: 2025-12-02 10:17:31.412 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:17:31 localhost nova_compute[281854]: 2025-12-02 10:17:31.453 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:31 localhost nova_compute[281854]: 2025-12-02 10:17:31.454 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:17:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e283 e283: 6 total, 6 up, 6 in Dec 2 05:17:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:17:32 localhost systemd[1]: tmp-crun.ofbFxu.mount: Deactivated successfully. 
Dec 2 05:17:32 localhost podman[335147]: 2025-12-02 10:17:32.4632906 +0000 UTC m=+0.095621273 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm) Dec 2 05:17:32 localhost podman[335147]: 2025-12-02 10:17:32.476872725 +0000 UTC m=+0.109203388 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 2 05:17:32 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:17:34 localhost openstack_network_exporter[242845]: ERROR 10:17:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:17:34 localhost openstack_network_exporter[242845]: Dec 2 05:17:34 localhost openstack_network_exporter[242845]: ERROR 10:17:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:17:34 localhost openstack_network_exporter[242845]: ERROR 10:17:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:17:34 localhost openstack_network_exporter[242845]: ERROR 10:17:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:17:34 localhost openstack_network_exporter[242845]: ERROR 10:17:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:17:34 localhost openstack_network_exporter[242845]: Dec 2 05:17:36 localhost podman[240799]: time="2025-12-02T10:17:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:17:36 localhost podman[240799]: @ - - [02/Dec/2025:10:17:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:17:36 localhost podman[240799]: @ - - [02/Dec/2025:10:17:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18786 "" "Go-http-client/1.1" Dec 2 05:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:17:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:17:36 localhost nova_compute[281854]: 2025-12-02 10:17:36.456 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:36 localhost nova_compute[281854]: 2025-12-02 10:17:36.460 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:36 localhost systemd[1]: tmp-crun.JeBZfQ.mount: Deactivated successfully. Dec 2 05:17:36 localhost podman[335167]: 2025-12-02 10:17:36.468208666 +0000 UTC m=+0.096510108 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.) 
Dec 2 05:17:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:36 localhost podman[335166]: 2025-12-02 10:17:36.508239688 +0000 UTC m=+0.137217458 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 2 05:17:36 localhost podman[335166]: 2025-12-02 10:17:36.542220498 +0000 UTC m=+0.171198308 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 2 
05:17:36 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:17:36 localhost podman[335168]: 2025-12-02 10:17:36.565396779 +0000 UTC m=+0.192685673 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:17:36 localhost podman[335167]: 2025-12-02 10:17:36.579723774 +0000 UTC m=+0.208025156 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, 
name=openstack_network_exporter, io.openshift.expose-services=, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc.) Dec 2 05:17:36 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:17:36 localhost podman[335168]: 2025-12-02 10:17:36.600035108 +0000 UTC m=+0.227323982 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 
'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:17:36 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:17:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 e284: 6 total, 6 up, 6 in Dec 2 05:17:37 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:17:37 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:17:41 localhost nova_compute[281854]: 2025-12-02 10:17:41.462 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:17:41 localhost nova_compute[281854]: 2025-12-02 10:17:41.465 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:17:41 localhost nova_compute[281854]: 2025-12-02 10:17:41.465 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:17:41 localhost nova_compute[281854]: 2025-12-02 10:17:41.465 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:17:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:41 localhost nova_compute[281854]: 2025-12-02 10:17:41.493 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:41 localhost nova_compute[281854]: 2025-12-02 10:17:41.495 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:17:43 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:17:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:17:43 localhost podman[335309]: 2025-12-02 10:17:43.430873708 +0000 UTC m=+0.071032904 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:17:43 localhost podman[335309]: 2025-12-02 10:17:43.438988585 +0000 UTC m=+0.079147751 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible) Dec 2 05:17:43 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:17:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:46 localhost nova_compute[281854]: 2025-12-02 10:17:46.494 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:46 localhost nova_compute[281854]: 2025-12-02 10:17:46.496 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:17:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:17:48 localhost podman[335330]: 2025-12-02 10:17:48.447496893 +0000 UTC m=+0.083441298 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 2 05:17:48 localhost podman[335329]: 2025-12-02 10:17:48.488701697 +0000 UTC m=+0.128671909 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:17:48 localhost podman[335329]: 2025-12-02 10:17:48.503815022 +0000 UTC m=+0.143785224 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:17:48 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:17:48 localhost podman[335330]: 2025-12-02 10:17:48.541185833 +0000 UTC m=+0.177130298 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:17:48 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:17:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:51 localhost nova_compute[281854]: 2025-12-02 10:17:51.497 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:17:51 localhost nova_compute[281854]: 2025-12-02 10:17:51.499 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:17:51 localhost nova_compute[281854]: 2025-12-02 10:17:51.499 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:17:51 localhost nova_compute[281854]: 2025-12-02 10:17:51.499 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:17:51 localhost nova_compute[281854]: 2025-12-02 10:17:51.536 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:51 localhost nova_compute[281854]: 2025-12-02 10:17:51.537 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:17:51 localhost nova_compute[281854]: 2025-12-02 10:17:51.712 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:17:51.714 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to 
row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:17:51 localhost ovn_metadata_agent[160216]: 2025-12-02 10:17:51.715 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 2 05:17:53 localhost ovn_metadata_agent[160216]: 2025-12-02 10:17:53.718 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 2 05:17:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:17:56 localhost nova_compute[281854]: 2025-12-02 10:17:56.538 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:17:56 localhost nova_compute[281854]: 2025-12-02 10:17:56.539 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:01 localhost nova_compute[281854]: 2025-12-02 
10:18:01.541 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:18:01 localhost nova_compute[281854]: 2025-12-02 10:18:01.542 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:18:01 localhost nova_compute[281854]: 2025-12-02 10:18:01.542 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:18:01 localhost nova_compute[281854]: 2025-12-02 10:18:01.543 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:18:01 localhost nova_compute[281854]: 2025-12-02 10:18:01.577 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:01 localhost nova_compute[281854]: 2025-12-02 10:18:01.578 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0. 
Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:01.927411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64 Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681927487, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1295, "num_deletes": 259, "total_data_size": 1454330, "memory_usage": 1481008, "flush_reason": "Manual Compaction"} Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681944795, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 950051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37410, "largest_seqno": 38700, "table_properties": {"data_size": 944687, "index_size": 2707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13018, "raw_average_key_size": 20, "raw_value_size": 933281, "raw_average_value_size": 1472, "num_data_blocks": 118, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670612, "oldest_key_time": 1764670612, "file_creation_time": 1764670681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}} Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 17441 microseconds, and 4221 cpu microseconds. Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:01.944859) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 950051 bytes OK Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:01.944882) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:01.946902) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:01.946923) EVENT_LOG_v1 {"time_micros": 1764670681946916, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:01.946945) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1447898, prev total WAL file size 
1448222, number of live WAL files 2. Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:01.947685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353233' seq:72057594037927935, type:22 .. '6C6F676D0034373735' seq:0, type:0; will stop at (end) Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(927KB)], [63(17MB)] Dec 2 05:18:01 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681947743, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 18809635, "oldest_snapshot_seqno": -1} Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 14561 keys, 18691478 bytes, temperature: kUnknown Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682050305, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 18691478, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18606751, "index_size": 47250, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 390557, "raw_average_key_size": 26, "raw_value_size": 18357911, 
"raw_average_value_size": 1260, "num_data_blocks": 1769, "num_entries": 14561, "num_filter_entries": 14561, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}} Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:02.050734) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 18691478 bytes Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:02.052274) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.2 rd, 182.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 17.0 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(39.5) write-amplify(19.7) OK, records in: 15099, records dropped: 538 output_compression: NoCompression Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:02.052304) EVENT_LOG_v1 {"time_micros": 1764670682052291, "job": 38, "event": "compaction_finished", "compaction_time_micros": 102658, "compaction_time_cpu_micros": 54729, "output_level": 6, "num_output_files": 1, "total_output_size": 18691478, "num_input_records": 15099, "num_output_records": 14561, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682052577, "job": 38, "event": "table_file_deletion", "file_number": 65} Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682055127, "job": 
38, "event": "table_file_deletion", "file_number": 63} Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:01.947510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:02.055203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:02.055209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:02.055212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:02.055215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:18:02.055217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:18:02 localhost nova_compute[281854]: 2025-12-02 10:18:02.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:03.061 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:18:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:03.061 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:18:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:03.062 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:18:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:18:03 localhost podman[335375]: 2025-12-02 10:18:03.437924543 +0000 UTC m=+0.075113884 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:18:03 localhost podman[335375]: 2025-12-02 10:18:03.452087662 +0000 UTC m=+0.089277033 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Dec 2 05:18:03 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:18:04 localhost openstack_network_exporter[242845]: ERROR 10:18:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:18:04 localhost openstack_network_exporter[242845]: ERROR 10:18:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:18:04 localhost openstack_network_exporter[242845]: ERROR 10:18:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:18:04 localhost openstack_network_exporter[242845]: ERROR 10:18:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:18:04 localhost openstack_network_exporter[242845]: Dec 2 05:18:04 localhost openstack_network_exporter[242845]: ERROR 10:18:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:18:04 localhost openstack_network_exporter[242845]: Dec 2 05:18:04 localhost nova_compute[281854]: 2025-12-02 10:18:04.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:04 localhost nova_compute[281854]: 2025-12-02 10:18:04.827 281858 DEBUG 
nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:18:06 localhost podman[240799]: time="2025-12-02T10:18:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:18:06 localhost podman[240799]: @ - - [02/Dec/2025:10:18:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:18:06 localhost podman[240799]: @ - - [02/Dec/2025:10:18:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18789 "" "Go-http-client/1.1" Dec 2 05:18:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:06 localhost nova_compute[281854]: 2025-12-02 10:18:06.578 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:06 localhost nova_compute[281854]: 2025-12-02 10:18:06.580 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:06 localhost nova_compute[281854]: 2025-12-02 10:18:06.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:06 localhost nova_compute[281854]: 2025-12-02 10:18:06.852 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:18:06 localhost nova_compute[281854]: 2025-12-02 10:18:06.853 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:18:06 localhost nova_compute[281854]: 2025-12-02 10:18:06.853 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:18:06 localhost nova_compute[281854]: 2025-12-02 10:18:06.854 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:18:06 localhost nova_compute[281854]: 2025-12-02 10:18:06.854 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:18:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:18:07 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2957551482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.290 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.351 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.352 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:18:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:18:07 localhost podman[335415]: 2025-12-02 10:18:07.459458502 +0000 UTC m=+0.088078422 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 2 05:18:07 localhost podman[335417]: 2025-12-02 10:18:07.502564547 +0000 UTC 
m=+0.119200156 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:18:07 localhost podman[335417]: 2025-12-02 10:18:07.516537051 +0000 UTC m=+0.133172670 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 
'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 2 05:18:07 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:18:07 localhost podman[335415]: 2025-12-02 10:18:07.590533274 +0000 UTC m=+0.219153134 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:18:07 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: 
Deactivated successfully. Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.620 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.622 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11026MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": 
"type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:18:07 localhost podman[335416]: 2025-12-02 10:18:07.570861507 +0000 UTC m=+0.196833296 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.622 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.622 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:18:07 localhost podman[335416]: 2025-12-02 10:18:07.651076647 +0000 UTC m=+0.277048396 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, version=9.6, vcs-type=git, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41) Dec 2 05:18:07 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.967 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.968 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:18:07 localhost nova_compute[281854]: 2025-12-02 10:18:07.968 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:18:08 localhost nova_compute[281854]: 2025-12-02 10:18:08.142 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:18:08 localhost systemd[1]: tmp-crun.84wnyh.mount: Deactivated successfully. Dec 2 05:18:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:18:08 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/893749530' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:18:08 localhost nova_compute[281854]: 2025-12-02 10:18:08.571 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:18:08 localhost nova_compute[281854]: 2025-12-02 10:18:08.579 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:18:08 localhost nova_compute[281854]: 2025-12-02 10:18:08.594 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:18:08 localhost nova_compute[281854]: 2025-12-02 10:18:08.597 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:18:08 localhost nova_compute[281854]: 2025-12-02 10:18:08.598 281858 DEBUG 
oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:18:08 localhost nova_compute[281854]: 2025-12-02 10:18:08.599 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:08 localhost nova_compute[281854]: 2025-12-02 10:18:08.600 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 2 05:18:09 localhost nova_compute[281854]: 2025-12-02 10:18:09.614 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:09 localhost nova_compute[281854]: 2025-12-02 10:18:09.614 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:18:09 localhost nova_compute[281854]: 2025-12-02 10:18:09.615 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:18:10 localhost nova_compute[281854]: 2025-12-02 10:18:10.169 281858 DEBUG oslo_concurrency.lockutils [None 
req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:18:10 localhost nova_compute[281854]: 2025-12-02 10:18:10.170 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:18:10 localhost nova_compute[281854]: 2025-12-02 10:18:10.170 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:18:10 localhost nova_compute[281854]: 2025-12-02 10:18:10.170 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.325 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.366 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.367 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.367 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.368 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m 
Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.368 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.369 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.391 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 2 05:18:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.581 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:18:11 localhost nova_compute[281854]: 2025-12-02 10:18:11.584 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:18:14 localhost podman[335495]: 2025-12-02 10:18:14.439676688 +0000 UTC m=+0.080855578 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:18:14 localhost podman[335495]: 2025-12-02 10:18:14.45505575 +0000 UTC m=+0.096234620 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true) Dec 2 05:18:14 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:18:14 localhost nova_compute[281854]: 2025-12-02 10:18:14.850 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:15 localhost nova_compute[281854]: 2025-12-02 10:18:15.822 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:15 localhost nova_compute[281854]: 2025-12-02 10:18:15.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.108 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'name': 'test', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005541913.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'hostId': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.109 12 INFO ceilometer.polling.manager [-] Polling 
pollster network.incoming.packets.error in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.113 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1d5af964-28d1-433f-916a-a31b8791db9c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.109677', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 
'message_id': '3760f804-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '268c340db948d721a5d3a62dd0b5031f1ac3ac9658dec4892500d2ed722a2d33'}]}, 'timestamp': '2025-12-02 10:18:16.113885', '_unique_id': '85848d0cfe2a427abd9127e84640232f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection 
refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.115 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.116 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.116 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c2126dc2-4a02-46d2-80e9-b74d45c0b9b1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.116915', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '376184d6-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '223e1273451caa5ee240838c986b36d2d247b9f8cd14976b4bc1cf482a1cb1b6'}]}, 'timestamp': '2025-12-02 10:18:16.117407', '_unique_id': 'ab8a2d9e03f24736bf050272fa2d991e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.118 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:18:16.119 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.119 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '31ed3d5c-68b4-4846-b945-f654bc2ccda0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.119596', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '3761ef7a-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '67ad3078d92a3ee0d11e1bd7f53c65227eae86a5af8c6540da1570b4dc78c04c'}]}, 'timestamp': '2025-12-02 10:18:16.120134', '_unique_id': '6758d981b6514c07ac6f18a56f83ec0b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.121 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.122 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.130 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.131 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '2e161097-4641-424c-b211-2d050ef56910', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.122355', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '37639fd2-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.341444862, 'message_signature': 'd9b14cb06cb97952d96b37d8c72e6fd1a96c5bd3f2791958880b6679cb7cc2ed'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.122355', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 
'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3763b2c4-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.341444862, 'message_signature': 'deb56e796fc87f3c74b04c7cb3f515ec8ef0c947004464a16dd34734daf1c56b'}]}, 'timestamp': '2025-12-02 10:18:16.131697', '_unique_id': 'cf7202307f6e4eb79fbd49a26c1ff6aa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.133 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.134 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.134 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.134 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3f279187-c2fe-461f-b963-4a11b1853004', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.134458', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '37643280-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '758ab3c85345f63d5e8079b5b5a2936a4f92ae84dbd776710c4764243baf6cf9'}]}, 'timestamp': '2025-12-02 10:18:16.134957', '_unique_id': '90dcde9a63194c7c8c0b9fc220414a49'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.135 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.137 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.137 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.137 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5c0f0a78-42ab-4b18-93d4-d2b4d1d46c6f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.137382', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '3764a486-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '4dba3d4871f9ecccae8cfba09d9930bb4ab8d69fe74986b40478fe188c48f01d'}]}, 'timestamp': '2025-12-02 10:18:16.137915', '_unique_id': '77f493025ed4438fbb126500b09d505d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.138 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.139 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.140 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'edb46fe6-95aa-4fad-bf99-3546779087de', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.140023', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '37650a16-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '9a6d2309a33d515f00f80bf8c6eeeb133df124c92cf245b0f8e37e85338df8c4'}]}, 'timestamp': '2025-12-02 10:18:16.140468', '_unique_id': '0ac4685da7994131a26b9a3f8b0887ca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.141 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.142 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.142 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.159 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/cpu volume: 20910000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '823d50b8-54d9-4f7a-b2dd-1981959a1109', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 20910000000, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:18:16.142872', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url':
None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '376803e2-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.378402722, 'message_signature': '63626773f9c24823e083ab680b26b3eac2a69e27893bb6b12744a6c1f7ad5ec3'}]}, 'timestamp': '2025-12-02 10:18:16.160042', '_unique_id': 'ccffb23b0c5347dc9695710f84d86705'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 
05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 
05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.161 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.162 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.188 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9ceedc82-275c-4ebb-aa62-1436c6115e36', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.162372', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '376c6d9c-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': 'bba7b5e4ebb33c53cbbc6f3bb0da9c148562a698928cb2be5cfaf291d6aa154a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.162372', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '376c7fc6-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '8c898dfc08d3d68458b9d7ccbcb1f91955c70d8eb11dc2be6e2e4d277433ab49'}]}, 'timestamp': '2025-12-02 10:18:16.189338', '_unique_id': '26beb9ff54aa442e9e01330395e11aa8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.190 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.191 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.191 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e92d28c7-56a8-49dc-b505-4f442be437a8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.191758', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '376cefec-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '864efdc807605adbf0caef4394a693735a42c6910f87340cc30eece309d72115'}]}, 'timestamp': '2025-12-02 10:18:16.192242', '_unique_id': '7b0e5cdb9d524320a74677e0e991b34d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.193 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.194 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 1962998170 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.194 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.latency volume: 38950418 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'deccbb8f-c0ab-4e61-9de0-c565ed508f20', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1962998170, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.194429', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '376d5bd0-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '0c3cda493dac94d7f950024ce8887a06c4d20d78dbc45f6bc7272ea5f5367acf'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 38950418, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.194429', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '376d6b8e-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '883aa2cdfb95837d17a9747e35efc4343cfa76284815ddd033ce5616ef0efca5'}]}, 'timestamp': '2025-12-02 10:18:16.195361', '_unique_id': 'a402e34047da45a79670504cd4239b03'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.196 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.197 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.197 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.198 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e458cbd6-fbc4-45c9-8c29-cd5f6dfe6940', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.197548', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '376dd358-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.341444862, 'message_signature': 'fa4ae2bbc5e780a02c85e761d4e7cd56323d3ce2207958f5749decb1a4dd2434'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.197548', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '376de35c-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.341444862, 'message_signature': '5ec30a121133fba7c436ac12851168e07eb0fdd633684f9da2f92e737f983f2a'}]}, 'timestamp': '2025-12-02 10:18:16.198433', '_unique_id': '57954d7d28364ad1975f40b1e56689cb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.199 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.200 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.200 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.201 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '2c723b71-6e6a-4469-a5e4-07692e7ff67a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.200719', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '376e4d7e-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '8f3647a950d7d16e6c2a19e8e48ff7cbf718dc213d384dc091d06db4bcd26980'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.200719', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '376e5d50-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '45b0f380df6f615d58bcae7dfa303143149c29ab11d8e44708d85a530a33373c'}]}, 'timestamp': '2025-12-02 10:18:16.201551', '_unique_id': 'a86cddf03e2b4dae8d623cc18b5355f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:18:16.202 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.202 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.203 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.203 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/memory.usage volume: 51.6328125 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dfb9911c-3b8f-439f-a068-6a7a91da409c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.6328125, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'timestamp': '2025-12-02T10:18:16.203779', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '376ec4de-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.378402722, 'message_signature': '2810913b054a313d5d43263b7b94f5e9474752f033da71e1f020e47aa72103b6'}]}, 'timestamp': '2025-12-02 10:18:16.204218', '_unique_id': '624ed98cc35e42f9aa7fc44daaf6e4b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 
10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.205 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.206 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.206 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6654bae5-dde0-4357-8687-a2505192fab8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.206490', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 
'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '376f3040-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '55b49840f84bad892cec6dc4a08e42485387d590b14ac13b4848cfcb6877e460'}]}, 'timestamp': '2025-12-02 10:18:16.206985', '_unique_id': '01fdfd73cf164c6c8d97b056e85487f9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging 
self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging return self._send(target, 
ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, 
purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.207 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.208 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.209 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3ff61568-8176-4168-b406-656dfe002591', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.209096', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '376f9472-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': '80bd2985cd16f20de0b50f677f7ce59f1f1320b2f0f398a6bcdaa67e133dede5'}]}, 'timestamp': '2025-12-02 10:18:16.209547', '_unique_id': '3a6cc3f2bba649f1baa5dbd674e3a32f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.210 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:18:16.211 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.211 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.212 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6d27014a-ccae-47e3-b5a8-4df83b3badfc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.211959', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 
'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3770040c-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': 'd354eb6b3fc739baaed936a94e5d14d4650140842f333ae75064b7024d33d503'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.211959', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '377013b6-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '49cd15f6e2d04df3a6760d1703de2a8113ac35f81e8e380ceb89ef30410365ab'}]}, 'timestamp': '2025-12-02 10:18:16.212831', '_unique_id': 'f0a26cab99a94f1987ea986327da12be'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.213 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.214 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.214 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 1807645093 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.215 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.latency volume: 89262124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd86fb09d-1be2-4bac-86d5-9f2c75e07648', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1807645093, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.214934', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3770784c-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': 'f55b946588ce1c12b940a7fd7bb4b1dc5d564653ff474020f92eb565b1ad9158'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 89262124, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.214934', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3770881e-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '981c50451e0055de9794df84c407cf37447998035baab0f2d0fcd584717b1b09'}]}, 'timestamp': '2025-12-02 10:18:16.215786', '_unique_id': '540d4863256549d79a746746c3649795'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.216 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.217 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.217 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.218 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dd2dc2ea-27ff-40a5-85ea-8258423541c6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.217906', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3770ec3c-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '9b9a1a72759803ef5289d612b348ee7a5881153dd583e67f4c8b55b262dcf81f'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.217906', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3770fbe6-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.381449873, 'message_signature': '34dfd98638bc83176d06d837e668f5e56997fe29fdfb3d12450502120a30386c'}]}, 'timestamp': '2025-12-02 10:18:16.218747', '_unique_id': 'b90b25f23526484da888d48acc5b8eed'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.219 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.220 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.220 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0ed24b17-dd61-4d1a-b74e-244e28882882', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'instance-00000002-b254bb7f-2891-4b37-9c44-9700e301ce16-tap4a318f6a-b3', 'timestamp': '2025-12-02T10:18:16.220867', 'resource_metadata': {'display_name': 'test', 'name': 'tap4a318f6a-b3', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:26:b2:03', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap4a318f6a-b3'}, 'message_id': '37716040-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.328757291, 'message_signature': 'e26f1be064159ff383e0b26bea1d915d56d027c6b6f77b42a046111877229b35'}]}, 'timestamp': '2025-12-02 10:18:16.221318', '_unique_id': '8b858fe706e147efa024f71b8535cb97'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging yield
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging
Dec 2 05:18:16 localhost
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 2 
05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.222 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 
2025-12-02 10:18:16.223 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.223 12 DEBUG ceilometer.compute.pollsters [-] b254bb7f-2891-4b37-9c44-9700e301ce16/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '92ba1599-b072-4834-a915-4a4595c9c594', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vda', 'timestamp': '2025-12-02T10:18:16.223373', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': 
'6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '3771c1ca-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.341444862, 'message_signature': '7d21e960c6408d4941010b4712c9a48654a4c7677b12e7fc6c8c1c72b48700d7'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'cb8b7d2a63b642aa999db12e17eeb9e4', 'user_name': None, 'project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'project_name': None, 'resource_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16-vdb', 'timestamp': '2025-12-02T10:18:16.223373', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b254bb7f-2891-4b37-9c44-9700e301ce16', 'instance_type': 'm1.small', 'host': '0ad0c9dfa8a31298b6a0f119d9616e77e0b75b76881f2815e01eeeff', 'instance_host': 'np0005541913.localdomain', 'flavor': {'id': '45a99238-6f19-4f9e-be82-6ef3af1dcb31', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa'}, 'image_ref': '6dbbc069-a02f-4753-b6a8-9fb02d7e0aaa', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '3771d2a0-cf68-11f0-a0da-fa163e3f40cc', 'monotonic_time': 12858.341444862, 'message_signature': 'f7e58f091a7878d2181c631115cd67682abed6fa825754cff3017003f9e9f6ca'}]}, 'timestamp': '2025-12-02 10:18:16.224219', '_unique_id': '9d2f29eb8ff64e26993fac56386f8508'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging yield Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 
localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 2 05:18:16 localhost 
ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 2 05:18:16 localhost ceilometer_agent_compute[238101]: 2025-12-02 10:18:16.225 12 ERROR oslo_messaging.notify.messaging Dec 2 05:18:16 localhost ceph-mon[298296]: 
mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:16 localhost nova_compute[281854]: 2025-12-02 10:18:16.585 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:16 localhost nova_compute[281854]: 2025-12-02 10:18:16.587 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:17 localhost nova_compute[281854]: 2025-12-02 10:18:17.826 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:17 localhost nova_compute[281854]: 2025-12-02 10:18:17.899 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:18:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. 
Dec 2 05:18:19 localhost podman[335514]: 2025-12-02 10:18:19.439929883 +0000 UTC m=+0.077663242 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:18:19 localhost podman[335515]: 2025-12-02 10:18:19.500512277 +0000 UTC m=+0.134714561 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible) Dec 2 05:18:19 localhost podman[335514]: 2025-12-02 10:18:19.520756039 +0000 UTC m=+0.158489408 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 2 05:18:19 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:18:19 localhost podman[335515]: 2025-12-02 10:18:19.54203498 +0000 UTC m=+0.176237284 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:18:19 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:18:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:21 localhost nova_compute[281854]: 2025-12-02 10:18:21.587 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:21 localhost nova_compute[281854]: 2025-12-02 10:18:21.590 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:26 localhost nova_compute[281854]: 2025-12-02 10:18:26.589 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:26 localhost nova_compute[281854]: 2025-12-02 10:18:26.593 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:31 localhost nova_compute[281854]: 2025-12-02 10:18:31.592 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:31 localhost nova_compute[281854]: 2025-12-02 10:18:31.596 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:32 localhost nova_compute[281854]: 2025-12-02 10:18:32.920 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic 
task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:32 localhost nova_compute[281854]: 2025-12-02 10:18:32.946 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Triggering sync for uuid b254bb7f-2891-4b37-9c44-9700e301ce16 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 2 05:18:32 localhost nova_compute[281854]: 2025-12-02 10:18:32.947 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "b254bb7f-2891-4b37-9c44-9700e301ce16" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:18:32 localhost nova_compute[281854]: 2025-12-02 10:18:32.947 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:18:32 localhost nova_compute[281854]: 2025-12-02 10:18:32.979 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "b254bb7f-2891-4b37-9c44-9700e301ce16" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:18:34 localhost openstack_network_exporter[242845]: ERROR 10:18:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:18:34 localhost openstack_network_exporter[242845]: ERROR 10:18:34 appctl.go:144: Failed to get PID 
for ovn-northd: no control socket files found for ovn-northd Dec 2 05:18:34 localhost openstack_network_exporter[242845]: ERROR 10:18:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:18:34 localhost openstack_network_exporter[242845]: ERROR 10:18:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:18:34 localhost openstack_network_exporter[242845]: Dec 2 05:18:34 localhost openstack_network_exporter[242845]: ERROR 10:18:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:18:34 localhost openstack_network_exporter[242845]: Dec 2 05:18:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:18:34 localhost podman[335562]: 2025-12-02 10:18:34.444672046 +0000 UTC m=+0.083556759 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:18:34 localhost podman[335562]: 2025-12-02 10:18:34.485340987 +0000 UTC m=+0.124225630 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 2 05:18:34 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:18:36 localhost podman[240799]: time="2025-12-02T10:18:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:18:36 localhost podman[240799]: @ - - [02/Dec/2025:10:18:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:18:36 localhost podman[240799]: @ - - [02/Dec/2025:10:18:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18778 "" "Go-http-client/1.1" Dec 2 05:18:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:36 localhost nova_compute[281854]: 2025-12-02 10:18:36.595 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:36 localhost nova_compute[281854]: 2025-12-02 10:18:36.598 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:36 localhost nova_compute[281854]: 2025-12-02 10:18:36.827 281858 DEBUG oslo_service.periodic_task 
[None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:18:38 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e285 e285: 6 total, 6 up, 6 in Dec 2 05:18:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:18:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:18:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:18:38 localhost podman[335600]: 2025-12-02 10:18:38.380976349 +0000 UTC m=+0.093017223 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9) Dec 2 05:18:38 localhost podman[335600]: 2025-12-02 10:18:38.394786229 +0000 UTC m=+0.106827113 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64) Dec 2 05:18:38 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:18:38 localhost systemd[1]: tmp-crun.NwLMHE.mount: Deactivated successfully. 
Dec 2 05:18:38 localhost podman[335599]: 2025-12-02 10:18:38.506086412 +0000 UTC m=+0.219694768 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Dec 2 05:18:38 localhost podman[335601]: 2025-12-02 10:18:38.463673876 +0000 UTC 
m=+0.171887607 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:18:38 localhost podman[335599]: 2025-12-02 10:18:38.540082323 +0000 UTC m=+0.253690669 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Dec 2 05:18:38 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. 
Dec 2 05:18:38 localhost podman[335601]: 2025-12-02 10:18:38.597712468 +0000 UTC m=+0.305926199 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 2 05:18:38 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:18:39 localhost systemd[1]: tmp-crun.Q11UaI.mount: Deactivated successfully. 
Dec 2 05:18:39 localhost podman[335786]: Dec 2 05:18:39 localhost podman[335786]: 2025-12-02 10:18:39.910996292 +0000 UTC m=+0.068114177 container create 60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_wescoff, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, name=rhceph, io.buildah.version=1.41.4, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 2 05:18:39 localhost systemd[1]: Started libpod-conmon-60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215.scope. Dec 2 05:18:39 localhost systemd[1]: Started libcrun container. 
Dec 2 05:18:39 localhost podman[335786]: 2025-12-02 10:18:39.879995121 +0000 UTC m=+0.037113026 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 2 05:18:39 localhost podman[335786]: 2025-12-02 10:18:39.979430945 +0000 UTC m=+0.136548840 container init 60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_wescoff, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 2 05:18:39 localhost podman[335786]: 2025-12-02 10:18:39.98629305 +0000 UTC m=+0.143410935 container start 60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_wescoff, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, 
io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, vcs-type=git, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 2 05:18:39 localhost podman[335786]: 2025-12-02 10:18:39.986583367 +0000 UTC m=+0.143701292 container attach 60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_wescoff, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, distribution-scope=public, version=7, GIT_CLEAN=True, release=1763362218, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64) Dec 2 05:18:39 localhost nifty_wescoff[335801]: 167 167 Dec 2 05:18:39 localhost systemd[1]: libpod-60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215.scope: Deactivated successfully. Dec 2 05:18:39 localhost podman[335786]: 2025-12-02 10:18:39.992623699 +0000 UTC m=+0.149741584 container died 60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_wescoff, vendor=Red Hat, Inc., name=rhceph, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, release=1763362218, RELEASE=main) Dec 2 05:18:40 localhost podman[335806]: 2025-12-02 10:18:40.097360485 +0000 UTC m=+0.094970366 container remove 60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_wescoff, version=7, distribution-scope=public, 
ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.41.4) Dec 2 05:18:40 localhost systemd[1]: libpod-conmon-60290afae581d8c2806cc1b5a9949ecd350d690dd7bd82b153af1b8d4d5c3215.scope: Deactivated successfully. 
Dec 2 05:18:40 localhost podman[335826]: Dec 2 05:18:40 localhost podman[335826]: 2025-12-02 10:18:40.30871559 +0000 UTC m=+0.054185964 container create 8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_rosalind, architecture=x86_64, vendor=Red Hat, Inc., release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True) Dec 2 05:18:40 localhost systemd[1]: Started libpod-conmon-8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a.scope. Dec 2 05:18:40 localhost systemd[1]: Started libcrun container. 
Dec 2 05:18:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e11162b740091b70afc3f2008db766c0c671f0cee215768b2b325d296b5a12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 2 05:18:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e11162b740091b70afc3f2008db766c0c671f0cee215768b2b325d296b5a12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 2 05:18:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e11162b740091b70afc3f2008db766c0c671f0cee215768b2b325d296b5a12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 2 05:18:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40e11162b740091b70afc3f2008db766c0c671f0cee215768b2b325d296b5a12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 2 05:18:40 localhost podman[335826]: 2025-12-02 10:18:40.371555373 +0000 UTC m=+0.117025727 container init 8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_rosalind, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, ceph=True, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 2 05:18:40 localhost systemd[1]: var-lib-containers-storage-overlay-090cac4657a350058074a7d962a1af2293d1abf09bc428da65503f358ddf5151-merged.mount: Deactivated successfully.
Dec 2 05:18:40 localhost podman[335826]: 2025-12-02 10:18:40.382135367 +0000 UTC m=+0.127605731 container start 8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_rosalind, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., release=1763362218, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4)
Dec 2 05:18:40 localhost podman[335826]: 2025-12-02 10:18:40.283047091 +0000 UTC m=+0.028517465 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 2 05:18:40 localhost podman[335826]: 2025-12-02 10:18:40.382354763 +0000 UTC m=+0.127825117 container attach 8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_rosalind, build-date=2025-11-26T19:44:28Z, architecture=x86_64, release=1763362218, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: [
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: {
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "available": false,
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "ceph_device": false,
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "lsm_data": {},
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "lvs": [],
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "path": "/dev/sr0",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "rejected_reasons": [
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "Insufficient space (<5GB)",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "Has a FileSystem"
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: ],
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "sys_api": {
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "actuators": null,
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "device_nodes": "sr0",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "human_readable_size": "482.00 KB",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "id_bus": "ata",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "model": "QEMU DVD-ROM",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "nr_requests": "2",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "partitions": {},
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "path": "/dev/sr0",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "removable": "1",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "rev": "2.5+",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "ro": "0",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "rotational": "1",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "sas_address": "",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "sas_device_handle": "",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "scheduler_mode": "mq-deadline",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "sectors": 0,
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "sectorsize": "2048",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "size": 493568.0,
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "support_discard": "0",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "type": "disk",
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: "vendor": "QEMU"
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: }
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: }
Dec 2 05:18:41 localhost pedantic_rosalind[335840]: ]
Dec 2 05:18:41 localhost systemd[1]: libpod-8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a.scope: Deactivated successfully.
Dec 2 05:18:41 localhost podman[335826]: 2025-12-02 10:18:41.410764472 +0000 UTC m=+1.156234876 container died 8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_rosalind, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=)
Dec 2 05:18:41 localhost systemd[1]: libpod-8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a.scope: Consumed 1.052s CPU time.
Dec 2 05:18:41 localhost systemd[1]: var-lib-containers-storage-overlay-40e11162b740091b70afc3f2008db766c0c671f0cee215768b2b325d296b5a12-merged.mount: Deactivated successfully.
Dec 2 05:18:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:18:41 localhost podman[337882]: 2025-12-02 10:18:41.498560714 +0000 UTC m=+0.077508777 container remove 8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_rosalind, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Dec 2 05:18:41 localhost systemd[1]: libpod-conmon-8952ea8364804d5c266aba65cd0416f5ad97ea75ffd3cb624bf71cf8254d848a.scope: Deactivated successfully.
Dec 2 05:18:41 localhost nova_compute[281854]: 2025-12-02 10:18:41.598 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:41 localhost nova_compute[281854]: 2025-12-02 10:18:41.600 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:42 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:18:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:43.558 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:18:43 localhost nova_compute[281854]: 2025-12-02 10:18:43.559 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:43 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:43.560 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 2 05:18:43 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:43.971 263406 INFO neutron.agent.linux.ip_lib [None req-ab550fef-c5d8-4d2f-bcfb-6fe2f68b2dae - - - - - -] Device tap9140f735-04 cannot be used as it has no MAC address#033[00m
Dec 2 05:18:43 localhost nova_compute[281854]: 2025-12-02 10:18:43.990 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:43 localhost kernel: device tap9140f735-04 entered promiscuous mode
Dec 2 05:18:43 localhost NetworkManager[5965]: [1764670723.9979] manager: (tap9140f735-04): new Generic device (/org/freedesktop/NetworkManager/Devices/94)
Dec 2 05:18:43 localhost ovn_controller[154505]: 2025-12-02T10:18:43Z|00612|binding|INFO|Claiming lport 9140f735-04c6-485f-9881-2f09b5b9f68f for this chassis.
Dec 2 05:18:43 localhost ovn_controller[154505]: 2025-12-02T10:18:43Z|00613|binding|INFO|9140f735-04c6-485f-9881-2f09b5b9f68f: Claiming unknown
Dec 2 05:18:44 localhost nova_compute[281854]: 2025-12-02 10:18:44.003 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:44 localhost systemd-udevd[337925]: Network interface NamePolicy= disabled on kernel command line.
Dec 2 05:18:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:44.010 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-0cdd3535-8c4d-40c0-93c7-242e2392e8fd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cdd3535-8c4d-40c0-93c7-242e2392e8fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4851cba82e304f60b99cf343fa7fcf33', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=138f5691-7516-45ed-9230-1f42c25749a2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9140f735-04c6-485f-9881-2f09b5b9f68f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 2 05:18:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:44.012 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 9140f735-04c6-485f-9881-2f09b5b9f68f in datapath 0cdd3535-8c4d-40c0-93c7-242e2392e8fd bound to our chassis#033[00m
Dec 2 05:18:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:44.014 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port e185d5d5-dabb-4b86-9834-d2cce80c75a1 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 2 05:18:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:44.014 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cdd3535-8c4d-40c0-93c7-242e2392e8fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 2 05:18:44 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:44.015 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[74b48429-8c8a-4522-acd6-2456ab7aae20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 2 05:18:44 localhost journal[230136]: ethtool ioctl error on tap9140f735-04: No such device
Dec 2 05:18:44 localhost ovn_controller[154505]: 2025-12-02T10:18:44Z|00614|binding|INFO|Setting lport 9140f735-04c6-485f-9881-2f09b5b9f68f ovn-installed in OVS
Dec 2 05:18:44 localhost journal[230136]: ethtool ioctl error on tap9140f735-04: No such device
Dec 2 05:18:44 localhost ovn_controller[154505]: 2025-12-02T10:18:44Z|00615|binding|INFO|Setting lport 9140f735-04c6-485f-9881-2f09b5b9f68f up in Southbound
Dec 2 05:18:44 localhost nova_compute[281854]: 2025-12-02 10:18:44.041 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:44 localhost journal[230136]: ethtool ioctl error on tap9140f735-04: No such device
Dec 2 05:18:44 localhost journal[230136]: ethtool ioctl error on tap9140f735-04: No such device
Dec 2 05:18:44 localhost journal[230136]: ethtool ioctl error on tap9140f735-04: No such device
Dec 2 05:18:44 localhost journal[230136]: ethtool ioctl error on tap9140f735-04: No such device
Dec 2 05:18:44 localhost journal[230136]: ethtool ioctl error on tap9140f735-04: No such device
Dec 2 05:18:44 localhost journal[230136]: ethtool ioctl error on tap9140f735-04: No such device
Dec 2 05:18:44 localhost nova_compute[281854]: 2025-12-02 10:18:44.083 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:44 localhost nova_compute[281854]: 2025-12-02 10:18:44.116 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:45 localhost podman[337997]:
Dec 2 05:18:45 localhost podman[337997]: 2025-12-02 10:18:45.076757413 +0000 UTC m=+0.088332708 container create ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 2 05:18:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.
Dec 2 05:18:45 localhost systemd[1]: Started libpod-conmon-ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960.scope.
Dec 2 05:18:45 localhost systemd[1]: Started libcrun container.
Dec 2 05:18:45 localhost podman[337997]: 2025-12-02 10:18:45.033703309 +0000 UTC m=+0.045278614 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 2 05:18:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9db6d3e7a07fe8276e0bd707c870a01daa8479ca21632c914dfb215bb86c31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 2 05:18:45 localhost podman[337997]: 2025-12-02 10:18:45.144058557 +0000 UTC m=+0.155633852 container init ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 2 05:18:45 localhost podman[337997]: 2025-12-02 10:18:45.158271978 +0000 UTC m=+0.169847243 container start ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 2 05:18:45 localhost dnsmasq[338026]: started, version 2.85 cachesize 150
Dec 2 05:18:45 localhost dnsmasq[338026]: DNS service limited to local subnets
Dec 2 05:18:45 localhost dnsmasq[338026]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 2 05:18:45 localhost dnsmasq[338026]: warning: no upstream servers configured
Dec 2 05:18:45 localhost dnsmasq-dhcp[338026]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 2 05:18:45 localhost dnsmasq[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/addn_hosts - 0 addresses
Dec 2 05:18:45 localhost dnsmasq-dhcp[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/host
Dec 2 05:18:45 localhost dnsmasq-dhcp[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/opts
Dec 2 05:18:45 localhost nova_compute[281854]: 2025-12-02 10:18:45.196 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:45 localhost podman[338011]: 2025-12-02 10:18:45.20503137 +0000 UTC m=+0.091684177 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 2 05:18:45 localhost podman[338011]: 2025-12-02 10:18:45.219062466 +0000 UTC m=+0.105715333 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd)
Dec 2 05:18:45 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully.
Dec 2 05:18:45 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:45.524 263406 INFO neutron.agent.dhcp.agent [None req-008bc975-25ef-4225-9ffa-d75471c11a31 - - - - - -] DHCP configuration for ports {'abfe3dd7-1bf6-404b-9299-6e689403d360'} is completed#033[00m
Dec 2 05:18:46 localhost systemd[1]: tmp-crun.L2Z3T9.mount: Deactivated successfully.
Dec 2 05:18:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:18:46 localhost nova_compute[281854]: 2025-12-02 10:18:46.627 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:46 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:46.850 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:46Z, description=, device_id=4ee86722-429a-4a47-a912-a41ad8c5f9ac, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6b7859fa-a1af-428d-a1af-07ce1483f5d6, ip_allocation=immediate, mac_address=fa:16:3e:07:2b:ea, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:18:40Z, description=, dns_domain=, id=0cdd3535-8c4d-40c0-93c7-242e2392e8fd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-917464313-network, port_security_enabled=True, project_id=4851cba82e304f60b99cf343fa7fcf33, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13608, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3934, status=ACTIVE, subnets=['8bc1674a-6256-4eeb-bfe9-12248911570d'], tags=[], tenant_id=4851cba82e304f60b99cf343fa7fcf33, updated_at=2025-12-02T10:18:42Z, vlan_transparent=None, network_id=0cdd3535-8c4d-40c0-93c7-242e2392e8fd, port_security_enabled=False, project_id=4851cba82e304f60b99cf343fa7fcf33, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3952, status=DOWN, tags=[], tenant_id=4851cba82e304f60b99cf343fa7fcf33, updated_at=2025-12-02T10:18:46Z on network 0cdd3535-8c4d-40c0-93c7-242e2392e8fd#033[00m
Dec 2 05:18:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 e286: 6 total, 6 up, 6 in
Dec 2 05:18:47 localhost dnsmasq[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/addn_hosts - 1 addresses
Dec 2 05:18:47 localhost podman[338052]: 2025-12-02 10:18:47.048789899 +0000 UTC m=+0.059426494 container kill ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 2 05:18:47 localhost dnsmasq-dhcp[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/host
Dec 2 05:18:47 localhost dnsmasq-dhcp[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/opts
Dec 2 05:18:47 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:47.603 263406 INFO neutron.agent.dhcp.agent [None req-3cabd811-42e4-4a49-b992-42a470152028 - - - - - -] DHCP configuration for ports {'6b7859fa-a1af-428d-a1af-07ce1483f5d6'} is completed#033[00m
Dec 2 05:18:48 localhost nova_compute[281854]: 2025-12-02 10:18:48.164 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:18:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:48.333 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:46Z, description=, device_id=4ee86722-429a-4a47-a912-a41ad8c5f9ac, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6b7859fa-a1af-428d-a1af-07ce1483f5d6, ip_allocation=immediate, mac_address=fa:16:3e:07:2b:ea, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:18:40Z, description=, dns_domain=, id=0cdd3535-8c4d-40c0-93c7-242e2392e8fd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-917464313-network, port_security_enabled=True, project_id=4851cba82e304f60b99cf343fa7fcf33, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13608, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3934, status=ACTIVE, subnets=['8bc1674a-6256-4eeb-bfe9-12248911570d'], tags=[], tenant_id=4851cba82e304f60b99cf343fa7fcf33, updated_at=2025-12-02T10:18:42Z, vlan_transparent=None, network_id=0cdd3535-8c4d-40c0-93c7-242e2392e8fd, port_security_enabled=False, project_id=4851cba82e304f60b99cf343fa7fcf33, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3952, status=DOWN, tags=[], tenant_id=4851cba82e304f60b99cf343fa7fcf33, updated_at=2025-12-02T10:18:46Z on network 0cdd3535-8c4d-40c0-93c7-242e2392e8fd#033[00m
Dec 2 05:18:48 localhost systemd[1]: tmp-crun.Yqhsha.mount: Deactivated successfully.
Dec 2 05:18:48 localhost dnsmasq[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/addn_hosts - 1 addresses
Dec 2 05:18:48 localhost podman[338089]: 2025-12-02 10:18:48.548764375 +0000 UTC m=+0.066827462 container kill ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 2 05:18:48 localhost dnsmasq-dhcp[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/host
Dec 2 05:18:48 localhost dnsmasq-dhcp[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/opts
Dec 2 05:18:48 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:48.775 263406 INFO neutron.agent.dhcp.agent [None req-769510e8-c3f6-40b8-ae06-321b62bac232 - - - - - -] DHCP configuration for ports {'6b7859fa-a1af-428d-a1af-07ce1483f5d6'} is completed#033[00m
Dec 2 05:18:49 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:49.563 160221 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 2 05:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 05:18:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:18:50 localhost systemd[1]: tmp-crun.ud3qdb.mount: Deactivated successfully. Dec 2 05:18:50 localhost podman[338109]: 2025-12-02 10:18:50.450105738 +0000 UTC m=+0.092369786 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 2 05:18:50 localhost podman[338110]: 2025-12-02 10:18:50.493293725 +0000 UTC m=+0.130206060 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 2 05:18:50 localhost podman[338109]: 2025-12-02 10:18:50.515248294 +0000 UTC m=+0.157512322 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 2 05:18:50 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated 
successfully. Dec 2 05:18:50 localhost podman[338110]: 2025-12-02 10:18:50.56324074 +0000 UTC m=+0.200153075 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:18:50 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:18:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:51 localhost nova_compute[281854]: 2025-12-02 10:18:51.630 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:52 localhost nova_compute[281854]: 2025-12-02 10:18:52.255 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:54 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:54.019 263406 INFO neutron.agent.linux.ip_lib [None req-480099a4-b06a-407c-b1d0-f50c3ff6b5ad - - - - - -] Device tap28b38d56-a5 cannot be used as it has no MAC address#033[00m Dec 2 05:18:54 localhost nova_compute[281854]: 2025-12-02 10:18:54.045 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:54 localhost kernel: device tap28b38d56-a5 entered promiscuous mode Dec 2 05:18:54 localhost NetworkManager[5965]: [1764670734.0540] manager: (tap28b38d56-a5): new Generic device (/org/freedesktop/NetworkManager/Devices/95) Dec 2 05:18:54 localhost ovn_controller[154505]: 2025-12-02T10:18:54Z|00616|binding|INFO|Claiming lport 28b38d56-a5a9-424b-abbe-f0fde1476653 for this chassis. Dec 2 05:18:54 localhost ovn_controller[154505]: 2025-12-02T10:18:54Z|00617|binding|INFO|28b38d56-a5a9-424b-abbe-f0fde1476653: Claiming unknown Dec 2 05:18:54 localhost nova_compute[281854]: 2025-12-02 10:18:54.055 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:54 localhost systemd-udevd[338168]: Network interface NamePolicy= disabled on kernel command line. 
Dec 2 05:18:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:54.064 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6279c547e4d448d29e2a37d1c9d24474', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8508df3b-5b28-4906-91b4-ce2a2aa7f0ab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=28b38d56-a5a9-424b-abbe-f0fde1476653) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:18:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:54.066 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 28b38d56-a5a9-424b-abbe-f0fde1476653 in datapath fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3 bound to our chassis#033[00m Dec 2 05:18:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:54.068 160221 DEBUG neutron.agent.ovn.metadata.agent [-] Port c89bb200-796c-4c7a-b886-ca81a9ccf118 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 2 05:18:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:54.071 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:18:54 localhost ovn_metadata_agent[160216]: 2025-12-02 10:18:54.073 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2fe1f7-4e27-4aac-8ae4-ace9576d140c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:18:54 localhost ovn_controller[154505]: 2025-12-02T10:18:54Z|00618|binding|INFO|Setting lport 28b38d56-a5a9-424b-abbe-f0fde1476653 ovn-installed in OVS Dec 2 05:18:54 localhost ovn_controller[154505]: 2025-12-02T10:18:54Z|00619|binding|INFO|Setting lport 28b38d56-a5a9-424b-abbe-f0fde1476653 up in Southbound Dec 2 05:18:54 localhost nova_compute[281854]: 2025-12-02 10:18:54.080 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:54 localhost nova_compute[281854]: 2025-12-02 10:18:54.106 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:54 localhost nova_compute[281854]: 2025-12-02 10:18:54.145 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:54 localhost nova_compute[281854]: 2025-12-02 10:18:54.182 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:55 localhost podman[338224]: Dec 2 05:18:55 localhost podman[338224]: 2025-12-02 10:18:55.088716603 +0000 UTC m=+0.091653548 container create 
b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 2 05:18:55 localhost systemd[1]: Started libpod-conmon-b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a.scope. Dec 2 05:18:55 localhost podman[338224]: 2025-12-02 10:18:55.044846207 +0000 UTC m=+0.047783162 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 2 05:18:55 localhost systemd[1]: tmp-crun.2UdlBg.mount: Deactivated successfully. Dec 2 05:18:55 localhost systemd[1]: Started libcrun container. 
Dec 2 05:18:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b42545da9579ee6141692819c2092ba10b87d18c3395d989ce008cca75dda0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 2 05:18:55 localhost podman[338224]: 2025-12-02 10:18:55.190292984 +0000 UTC m=+0.193229949 container init b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:18:55 localhost podman[338224]: 2025-12-02 10:18:55.203272462 +0000 UTC m=+0.206209417 container start b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 2 05:18:55 localhost dnsmasq[338242]: started, version 2.85 cachesize 150 Dec 2 05:18:55 localhost dnsmasq[338242]: DNS service limited to local subnets Dec 2 05:18:55 localhost dnsmasq[338242]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 2 05:18:55 localhost dnsmasq[338242]: warning: no upstream servers configured Dec 
2 05:18:55 localhost dnsmasq-dhcp[338242]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 2 05:18:55 localhost dnsmasq[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/addn_hosts - 0 addresses Dec 2 05:18:55 localhost dnsmasq-dhcp[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/host Dec 2 05:18:55 localhost dnsmasq-dhcp[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/opts Dec 2 05:18:55 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:55.514 263406 INFO neutron.agent.dhcp.agent [None req-d6ab4103-db52-4cd5-ab0e-c9cbd42626e1 - - - - - -] DHCP configuration for ports {'e4cd5106-3442-4e72-80e4-d3955ed089d9'} is completed#033[00m Dec 2 05:18:55 localhost nova_compute[281854]: 2025-12-02 10:18:55.902 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:56 localhost systemd[1]: tmp-crun.YxELvL.mount: Deactivated successfully. 
Dec 2 05:18:56 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:18:56 localhost nova_compute[281854]: 2025-12-02 10:18:56.632 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:57 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:57.172 263406 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:56Z, description=, device_id=146339f0-3e49-419f-a49c-241664c75695, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=426df494-1891-4082-a33e-37b61ae617b3, ip_allocation=immediate, mac_address=fa:16:3e:70:cc:d6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:18:51Z, description=, dns_domain=, id=fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1848591313-network, port_security_enabled=True, project_id=6279c547e4d448d29e2a37d1c9d24474, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13957, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3960, status=ACTIVE, subnets=['b5ae3d95-a8a3-4b1f-84fe-fbd5b83bc30c'], tags=[], tenant_id=6279c547e4d448d29e2a37d1c9d24474, updated_at=2025-12-02T10:18:52Z, vlan_transparent=None, network_id=fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, port_security_enabled=False, project_id=6279c547e4d448d29e2a37d1c9d24474, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, 
security_groups=[], standard_attr_id=3968, status=DOWN, tags=[], tenant_id=6279c547e4d448d29e2a37d1c9d24474, updated_at=2025-12-02T10:18:56Z on network fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3#033[00m Dec 2 05:18:57 localhost dnsmasq[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/addn_hosts - 1 addresses Dec 2 05:18:57 localhost dnsmasq-dhcp[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/host Dec 2 05:18:57 localhost dnsmasq-dhcp[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/opts Dec 2 05:18:57 localhost podman[338260]: 2025-12-02 10:18:57.36569564 +0000 UTC m=+0.044661298 container kill b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:18:57 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:57.662 263406 INFO neutron.agent.dhcp.agent [None req-4aa9b142-ab2f-4653-bd32-338fd27a0360 - - - - - -] DHCP configuration for ports {'426df494-1891-4082-a33e-37b61ae617b3'} is completed#033[00m Dec 2 05:18:57 localhost ovn_controller[154505]: 2025-12-02T10:18:57Z|00620|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:18:57 localhost nova_compute[281854]: 2025-12-02 10:18:57.773 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:18:58 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:58.743 263406 INFO neutron.agent.dhcp.agent [-] Trigger 
reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:56Z, description=, device_id=146339f0-3e49-419f-a49c-241664c75695, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=426df494-1891-4082-a33e-37b61ae617b3, ip_allocation=immediate, mac_address=fa:16:3e:70:cc:d6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:18:51Z, description=, dns_domain=, id=fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1848591313-network, port_security_enabled=True, project_id=6279c547e4d448d29e2a37d1c9d24474, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=13957, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3960, status=ACTIVE, subnets=['b5ae3d95-a8a3-4b1f-84fe-fbd5b83bc30c'], tags=[], tenant_id=6279c547e4d448d29e2a37d1c9d24474, updated_at=2025-12-02T10:18:52Z, vlan_transparent=None, network_id=fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, port_security_enabled=False, project_id=6279c547e4d448d29e2a37d1c9d24474, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3968, status=DOWN, tags=[], tenant_id=6279c547e4d448d29e2a37d1c9d24474, updated_at=2025-12-02T10:18:56Z on network fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3#033[00m Dec 2 05:18:58 localhost dnsmasq[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/addn_hosts - 1 addresses Dec 2 05:18:58 localhost dnsmasq-dhcp[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/host Dec 2 05:18:58 localhost podman[338297]: 2025-12-02 10:18:58.982126797 +0000 
UTC m=+0.059231178 container kill b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 2 05:18:58 localhost dnsmasq-dhcp[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/opts Dec 2 05:18:59 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:18:59.261 263406 INFO neutron.agent.dhcp.agent [None req-add556c9-0271-44b2-ab4d-946b2f4e74f3 - - - - - -] DHCP configuration for ports {'426df494-1891-4082-a33e-37b61ae617b3'} is completed#033[00m Dec 2 05:19:01 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:01 localhost nova_compute[281854]: 2025-12-02 10:19:01.683 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:01 localhost dnsmasq[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/addn_hosts - 0 addresses Dec 2 05:19:01 localhost dnsmasq-dhcp[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/host Dec 2 05:19:01 localhost podman[338334]: 2025-12-02 10:19:01.857666096 +0000 UTC m=+0.037897367 container kill ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2) Dec 2 05:19:01 localhost dnsmasq-dhcp[338026]: read /var/lib/neutron/dhcp/0cdd3535-8c4d-40c0-93c7-242e2392e8fd/opts Dec 2 05:19:02 localhost ovn_controller[154505]: 2025-12-02T10:19:02Z|00621|binding|INFO|Releasing lport 9140f735-04c6-485f-9881-2f09b5b9f68f from this chassis (sb_readonly=0) Dec 2 05:19:02 localhost ovn_controller[154505]: 2025-12-02T10:19:02Z|00622|binding|INFO|Setting lport 9140f735-04c6-485f-9881-2f09b5b9f68f down in Southbound Dec 2 05:19:02 localhost kernel: device tap9140f735-04 left promiscuous mode Dec 2 05:19:02 localhost nova_compute[281854]: 2025-12-02 10:19:02.004 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:02.013 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-0cdd3535-8c4d-40c0-93c7-242e2392e8fd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cdd3535-8c4d-40c0-93c7-242e2392e8fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4851cba82e304f60b99cf343fa7fcf33', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=138f5691-7516-45ed-9230-1f42c25749a2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9140f735-04c6-485f-9881-2f09b5b9f68f) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:19:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:02.015 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 9140f735-04c6-485f-9881-2f09b5b9f68f in datapath 0cdd3535-8c4d-40c0-93c7-242e2392e8fd unbound from our chassis#033[00m Dec 2 05:19:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:02.017 160221 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cdd3535-8c4d-40c0-93c7-242e2392e8fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:19:02 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:02.018 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[341e0de9-de03-4b36-b03d-8e929dd9a3ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:19:02 localhost nova_compute[281854]: 2025-12-02 10:19:02.035 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:03.062 160221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:19:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:03.062 160221 
DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:19:03 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:03.063 160221 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:19:03 localhost ovn_controller[154505]: 2025-12-02T10:19:03Z|00623|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:19:03 localhost nova_compute[281854]: 2025-12-02 10:19:03.916 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:04 localhost openstack_network_exporter[242845]: ERROR 10:19:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:19:04 localhost openstack_network_exporter[242845]: ERROR 10:19:04 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:19:04 localhost openstack_network_exporter[242845]: ERROR 10:19:04 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:19:04 localhost openstack_network_exporter[242845]: ERROR 10:19:04 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:19:04 localhost openstack_network_exporter[242845]: Dec 2 05:19:04 localhost openstack_network_exporter[242845]: ERROR 10:19:04 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:19:04 localhost openstack_network_exporter[242845]: Dec 2 05:19:04 localhost dnsmasq[338026]: exiting on 
receipt of SIGTERM Dec 2 05:19:04 localhost systemd[1]: libpod-ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960.scope: Deactivated successfully. Dec 2 05:19:04 localhost podman[338374]: 2025-12-02 10:19:04.307713231 +0000 UTC m=+0.055716853 container kill ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:19:04 localhost podman[338388]: 2025-12-02 10:19:04.388852676 +0000 UTC m=+0.067422508 container died ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:19:04 localhost systemd[1]: tmp-crun.I5tcTS.mount: Deactivated successfully. 
Dec 2 05:19:04 localhost podman[338388]: 2025-12-02 10:19:04.434810388 +0000 UTC m=+0.113380190 container cleanup ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 2 05:19:04 localhost systemd[1]: libpod-conmon-ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960.scope: Deactivated successfully. Dec 2 05:19:04 localhost podman[338390]: 2025-12-02 10:19:04.516808475 +0000 UTC m=+0.186409086 container remove ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0cdd3535-8c4d-40c0-93c7-242e2392e8fd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 2 05:19:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:19:04.538 263406 INFO neutron.agent.dhcp.agent [None req-f176ccad-6058-4814-876f-c7ca3d01f249 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:19:04 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:19:04.644 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:19:04 localhost nova_compute[281854]: 2025-12-02 10:19:04.837 281858 DEBUG 
oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:04 localhost nova_compute[281854]: 2025-12-02 10:19:04.838 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:04 localhost nova_compute[281854]: 2025-12-02 10:19:04.839 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 2 05:19:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. 
Dec 2 05:19:05 localhost podman[338416]: 2025-12-02 10:19:05.182680919 +0000 UTC m=+0.070786408 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 2 05:19:05 localhost podman[338416]: 2025-12-02 10:19:05.193968691 +0000 UTC m=+0.082074160 container exec_died 
31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 2 05:19:05 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. 
Dec 2 05:19:05 localhost systemd[1]: var-lib-containers-storage-overlay-8f9db6d3e7a07fe8276e0bd707c870a01daa8479ca21632c914dfb215bb86c31-merged.mount: Deactivated successfully. Dec 2 05:19:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab0b2512ad46534a946df6c0a9cadbaf00d75a289804d3b3815d3b3b0f2c9960-userdata-shm.mount: Deactivated successfully. Dec 2 05:19:05 localhost systemd[1]: run-netns-qdhcp\x2d0cdd3535\x2d8c4d\x2d40c0\x2d93c7\x2d242e2392e8fd.mount: Deactivated successfully. Dec 2 05:19:05 localhost dnsmasq[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/addn_hosts - 0 addresses Dec 2 05:19:05 localhost dnsmasq-dhcp[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/host Dec 2 05:19:05 localhost dnsmasq-dhcp[338242]: read /var/lib/neutron/dhcp/fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3/opts Dec 2 05:19:05 localhost podman[338452]: 2025-12-02 10:19:05.376721038 +0000 UTC m=+0.064888799 container kill b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 2 05:19:05 localhost ovn_controller[154505]: 2025-12-02T10:19:05Z|00624|binding|INFO|Releasing lport 28b38d56-a5a9-424b-abbe-f0fde1476653 from this chassis (sb_readonly=0) Dec 2 05:19:05 localhost ovn_controller[154505]: 2025-12-02T10:19:05Z|00625|binding|INFO|Setting lport 28b38d56-a5a9-424b-abbe-f0fde1476653 down in Southbound Dec 2 05:19:05 localhost kernel: device tap28b38d56-a5 left promiscuous mode Dec 2 05:19:05 localhost nova_compute[281854]: 
2025-12-02 10:19:05.847 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:05.855 160221 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541913.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8b822d43-1849-5d2e-aa2f-5f185b27d539-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6279c547e4d448d29e2a37d1c9d24474', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8508df3b-5b28-4906-91b4-ce2a2aa7f0ab, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=28b38d56-a5a9-424b-abbe-f0fde1476653) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 2 05:19:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:05.857 160221 INFO neutron.agent.ovn.metadata.agent [-] Port 28b38d56-a5a9-424b-abbe-f0fde1476653 in datapath fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3 unbound from our chassis#033[00m Dec 2 05:19:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:05.859 160221 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 2 05:19:05 localhost ovn_metadata_agent[160216]: 2025-12-02 10:19:05.860 160340 DEBUG oslo.privsep.daemon [-] privsep: reply[539a82a6-88e8-4b37-95df-dcc9cb41645a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 2 05:19:05 localhost nova_compute[281854]: 2025-12-02 10:19:05.863 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:06 localhost podman[240799]: time="2025-12-02T10:19:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:19:06 localhost podman[240799]: @ - - [02/Dec/2025:10:19:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156107 "" "Go-http-client/1.1" Dec 2 05:19:06 localhost podman[240799]: @ - - [02/Dec/2025:10:19:06 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19256 "" "Go-http-client/1.1" Dec 2 05:19:06 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.682 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.688 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.827 281858 DEBUG 
oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.842 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.842 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.842 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.842 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Auditing locally available compute resources for np0005541913.localdomain (node: np0005541913.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.842 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf 
/etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:19:06 localhost ovn_controller[154505]: 2025-12-02T10:19:06Z|00626|binding|INFO|Releasing lport d6e7da3f-8574-49e0-8ba1-2f642b3cec92 from this chassis (sb_readonly=0) Dec 2 05:19:06 localhost nova_compute[281854]: 2025-12-02 10:19:06.951 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:07 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 2 05:19:07 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3253900841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.287 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:19:07 localhost dnsmasq[338242]: exiting on receipt of SIGTERM Dec 2 05:19:07 localhost podman[338512]: 2025-12-02 10:19:07.342733894 +0000 UTC m=+0.063612756 container kill b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 2 05:19:07 localhost systemd[1]: 
libpod-b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a.scope: Deactivated successfully. Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.351 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.352 281858 DEBUG nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 2 05:19:07 localhost podman[338530]: 2025-12-02 10:19:07.410309654 +0000 UTC m=+0.051314725 container died b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 2 05:19:07 localhost systemd[1]: tmp-crun.xS7qRZ.mount: Deactivated successfully. Dec 2 05:19:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a-userdata-shm.mount: Deactivated successfully. 
Dec 2 05:19:07 localhost podman[338530]: 2025-12-02 10:19:07.454753536 +0000 UTC m=+0.095758537 container remove b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fd87b2bf-bab3-461e-aa7f-9d5a4bfb5ae3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 2 05:19:07 localhost systemd[1]: libpod-conmon-b7bbc8332b03035553a1b35fea11ac5e5df59c880963be6b1ed6c2a308de237a.scope: Deactivated successfully. Dec 2 05:19:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:19:07.496 263406 INFO neutron.agent.dhcp.agent [None req-e34c15f8-58b0-44d6-aa04-fdcf0504adc4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.530 281858 WARNING nova.virt.libvirt.driver [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.533 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Hypervisor/Node resource view: name=np0005541913.localdomain free_ram=11023MB free_disk=41.837013244628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": 
"7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.533 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.534 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 2 05:19:07 localhost neutron_dhcp_agent[263402]: 2025-12-02 10:19:07.629 263406 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.714 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Instance b254bb7f-2891-4b37-9c44-9700e301ce16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.715 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.716 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Final resource view: name=np0005541913.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.730 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing inventories for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.748 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating ProviderTree inventory for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 2 05:19:07 
localhost nova_compute[281854]: 2025-12-02 10:19:07.749 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Updating inventory in ProviderTree for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.760 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing aggregate associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.781 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Refreshing trait associations for resource provider c79215b2-6762-4f7f-a322-f44db2b0b9bd, traits: 
COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SOCKET_PCI_NUMA_AFFINITY _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 2 05:19:07 localhost nova_compute[281854]: 2025-12-02 10:19:07.819 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 2 05:19:08 localhost ceph-mon[298296]: mon.np0005541913@1(peon) e17 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Dec 2 05:19:08 localhost ceph-mon[298296]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4109271996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 2 05:19:08 localhost nova_compute[281854]: 2025-12-02 10:19:08.247 281858 DEBUG oslo_concurrency.processutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 2 05:19:08 localhost nova_compute[281854]: 2025-12-02 10:19:08.254 281858 DEBUG nova.compute.provider_tree [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed in ProviderTree for provider: c79215b2-6762-4f7f-a322-f44db2b0b9bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 2 05:19:08 localhost nova_compute[281854]: 2025-12-02 10:19:08.269 281858 DEBUG nova.scheduler.client.report [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Inventory has not changed for provider c79215b2-6762-4f7f-a322-f44db2b0b9bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 2 05:19:08 localhost nova_compute[281854]: 2025-12-02 10:19:08.272 281858 DEBUG nova.compute.resource_tracker [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Compute_service record updated for np0005541913.localdomain:np0005541913.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 2 05:19:08 localhost nova_compute[281854]: 2025-12-02 10:19:08.273 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 2 05:19:08 localhost systemd[1]: var-lib-containers-storage-overlay-99b42545da9579ee6141692819c2092ba10b87d18c3395d989ce008cca75dda0-merged.mount: Deactivated successfully. Dec 2 05:19:08 localhost systemd[1]: run-netns-qdhcp\x2dfd87b2bf\x2dbab3\x2d461e\x2daa7f\x2d9d5a4bfb5ae3.mount: Deactivated successfully. Dec 2 05:19:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:19:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:19:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. 
Dec 2 05:19:09 localhost podman[338582]: 2025-12-02 10:19:09.210033333 +0000 UTC m=+0.115794414 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:19:09 localhost podman[338582]: 2025-12-02 10:19:09.218420038 +0000 UTC m=+0.124181119 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 2 05:19:09 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. Dec 2 05:19:09 localhost podman[338577]: 2025-12-02 10:19:09.182998909 +0000 UTC m=+0.089527590 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7) Dec 2 05:19:09 localhost podman[338576]: 2025-12-02 10:19:09.161519443 +0000 UTC m=+0.075663688 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 2 05:19:09 localhost podman[338577]: 2025-12-02 10:19:09.267160205 +0000 UTC m=+0.173688836 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350) Dec 2 05:19:09 localhost nova_compute[281854]: 2025-12-02 10:19:09.274 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:09 localhost nova_compute[281854]: 2025-12-02 10:19:09.275 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 2 05:19:09 localhost nova_compute[281854]: 2025-12-02 10:19:09.275 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 2 05:19:09 localhost systemd[1]: 
6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:19:09 localhost podman[338576]: 2025-12-02 10:19:09.295191216 +0000 UTC m=+0.209335431 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 2 
05:19:09 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:19:10 localhost nova_compute[281854]: 2025-12-02 10:19:10.379 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquiring lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 2 05:19:10 localhost nova_compute[281854]: 2025-12-02 10:19:10.380 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Acquired lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 2 05:19:10 localhost nova_compute[281854]: 2025-12-02 10:19:10.381 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 2 05:19:10 localhost nova_compute[281854]: 2025-12-02 10:19:10.382 281858 DEBUG nova.objects.instance [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Lazy-loading 'info_cache' on Instance uuid b254bb7f-2891-4b37-9c44-9700e301ce16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 2 05:19:10 localhost nova_compute[281854]: 2025-12-02 10:19:10.983 281858 DEBUG nova.network.neutron [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updating instance_info_cache with network_info: [{"id": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "address": "fa:16:3e:26:b2:03", "network": {"id": "595e1c9b-709c-41d2-9212-0b18b13291a8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": 
"gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.102", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "e2d97696ab6749899bb8ba5ce29a3de2", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a318f6a-b3", "ovs_interfaceid": "4a318f6a-b3c1-4690-8246-f7d046ccd64a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 2 05:19:10 localhost nova_compute[281854]: 2025-12-02 10:19:10.998 281858 DEBUG oslo_concurrency.lockutils [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Releasing lock "refresh_cache-b254bb7f-2891-4b37-9c44-9700e301ce16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 2 05:19:10 localhost nova_compute[281854]: 2025-12-02 10:19:10.999 281858 DEBUG nova.compute.manager [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] [instance: b254bb7f-2891-4b37-9c44-9700e301ce16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 2 05:19:11 localhost nova_compute[281854]: 2025-12-02 10:19:11.000 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:11 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:11 localhost nova_compute[281854]: 2025-12-02 10:19:11.684 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:11 localhost nova_compute[281854]: 2025-12-02 10:19:11.690 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:11 localhost nova_compute[281854]: 2025-12-02 10:19:11.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. Dec 2 05:19:15 localhost podman[338636]: 2025-12-02 10:19:15.443402895 +0000 UTC m=+0.083492839 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2) Dec 2 05:19:15 localhost podman[338636]: 2025-12-02 10:19:15.456035773 +0000 UTC m=+0.096125697 container exec_died f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3) Dec 2 05:19:15 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. Dec 2 05:19:15 localhost nova_compute[281854]: 2025-12-02 10:19:15.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:16 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:16 localhost nova_compute[281854]: 2025-12-02 10:19:16.687 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:16 localhost nova_compute[281854]: 2025-12-02 10:19:16.692 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:17 localhost nova_compute[281854]: 2025-12-02 10:19:17.823 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task 
ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:18 localhost nova_compute[281854]: 2025-12-02 10:19:18.827 281858 DEBUG oslo_service.periodic_task [None req-023d8683-8286-4dd2-80e5-5e768347515b - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 2 05:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709. Dec 2 05:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782. Dec 2 05:19:21 localhost systemd[1]: tmp-crun.r06tR2.mount: Deactivated successfully. Dec 2 05:19:21 localhost podman[338655]: 2025-12-02 10:19:21.447211293 +0000 UTC m=+0.089819658 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:19:21 localhost podman[338655]: 2025-12-02 10:19:21.45567284 +0000 UTC 
m=+0.098281185 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 2 05:19:21 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully. 
Dec 2 05:19:21 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:21 localhost podman[338656]: 2025-12-02 10:19:21.546691378 +0000 UTC m=+0.185597824 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:19:21 localhost podman[338656]: 2025-12-02 10:19:21.608037992 +0000 UTC m=+0.246944448 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible) Dec 2 05:19:21 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully. 
Dec 2 05:19:21 localhost nova_compute[281854]: 2025-12-02 10:19:21.690 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:26 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:26 localhost nova_compute[281854]: 2025-12-02 10:19:26.696 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:19:26 localhost nova_compute[281854]: 2025-12-02 10:19:26.698 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:19:26 localhost nova_compute[281854]: 2025-12-02 10:19:26.698 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:19:26 localhost nova_compute[281854]: 2025-12-02 10:19:26.698 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:19:26 localhost nova_compute[281854]: 2025-12-02 10:19:26.719 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:26 localhost nova_compute[281854]: 2025-12-02 10:19:26.719 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:19:31 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:31 localhost nova_compute[281854]: 2025-12-02 10:19:31.720 281858 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:19:31 localhost nova_compute[281854]: 2025-12-02 10:19:31.722 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:19:31 localhost nova_compute[281854]: 2025-12-02 10:19:31.723 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:19:31 localhost nova_compute[281854]: 2025-12-02 10:19:31.723 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:19:31 localhost nova_compute[281854]: 2025-12-02 10:19:31.761 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:31 localhost nova_compute[281854]: 2025-12-02 10:19:31.762 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:19:34 localhost openstack_network_exporter[242845]: ERROR 10:19:34 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 2 05:19:34 localhost openstack_network_exporter[242845]: ERROR 10:19:34 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 2 05:19:34 localhost openstack_network_exporter[242845]: Dec 2 05:19:34 localhost openstack_network_exporter[242845]: ERROR 10:19:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 05:19:34 localhost openstack_network_exporter[242845]: ERROR 10:19:34 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 2 
05:19:34 localhost openstack_network_exporter[242845]: ERROR 10:19:34 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 2 05:19:34 localhost openstack_network_exporter[242845]: Dec 2 05:19:35 localhost sshd[338703]: main: sshd: ssh-rsa algorithm is disabled Dec 2 05:19:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563. Dec 2 05:19:35 localhost systemd-logind[757]: New session 75 of user zuul. Dec 2 05:19:35 localhost systemd[1]: Started Session 75 of User zuul. Dec 2 05:19:35 localhost podman[338705]: 2025-12-02 10:19:35.338784717 +0000 UTC m=+0.086276973 container health_status 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 2 05:19:35 localhost podman[338705]: 2025-12-02 10:19:35.379902058 +0000 UTC m=+0.127394294 container exec_died 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 2 05:19:35 localhost systemd[1]: 31391dff098ec12c4407be6c4a42b061b76ff5e7040461487a3665b0548d6563.service: Deactivated successfully. Dec 2 05:19:35 localhost python3[338743]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-cf35-eb71-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 2 05:19:36 localhost podman[240799]: time="2025-12-02T10:19:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 2 05:19:36 localhost podman[240799]: @ - - [02/Dec/2025:10:19:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154283 "" "Go-http-client/1.1" Dec 2 05:19:36 localhost podman[240799]: @ - - [02/Dec/2025:10:19:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18789 "" "Go-http-client/1.1" Dec 2 05:19:36 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:36 localhost nova_compute[281854]: 2025-12-02 10:19:36.763 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:19:36 localhost nova_compute[281854]: 2025-12-02 10:19:36.797 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 2 05:19:36 localhost nova_compute[281854]: 2025-12-02 10:19:36.798 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5035 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 2 05:19:36 localhost nova_compute[281854]: 2025-12-02 10:19:36.798 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:19:36 localhost nova_compute[281854]: 2025-12-02 10:19:36.800 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:36 localhost nova_compute[281854]: 2025-12-02 10:19:36.802 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0. 
Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.070946) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67 Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777070977, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1478, "num_deletes": 253, "total_data_size": 2573017, "memory_usage": 2646480, "flush_reason": "Manual Compaction"} Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777079486, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 1691608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38705, "largest_seqno": 40178, "table_properties": {"data_size": 1685803, "index_size": 3083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13812, "raw_average_key_size": 21, "raw_value_size": 1673636, "raw_average_value_size": 2570, "num_data_blocks": 131, "num_entries": 651, "num_filter_entries": 651, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670681, "oldest_key_time": 1764670681, "file_creation_time": 1764670777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}} Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 8564 microseconds, and 2632 cpu microseconds. Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.079510) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 1691608 bytes OK Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.079523) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081204) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081214) EVENT_LOG_v1 {"time_micros": 1764670777081211, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2565946, prev total WAL file size 
2565946, number of live WAL files 2. Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081712) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end) Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(1651KB)], [66(17MB)] Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777081761, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 20383086, "oldest_snapshot_seqno": -1} Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 14676 keys, 19065069 bytes, temperature: kUnknown Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777186691, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 19065069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18980124, "index_size": 47148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36741, "raw_key_size": 393571, "raw_average_key_size": 26, "raw_value_size": 18729844, 
"raw_average_value_size": 1276, "num_data_blocks": 1761, "num_entries": 14676, "num_filter_entries": 14676, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669385, "oldest_key_time": 0, "file_creation_time": 1764670777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2b5a5119-a77e-4ac2-8a7c-136bbfa56c89", "db_session_id": "7NRXCK2K9UGWEPQBYWTV", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}} Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187131) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 19065069 bytes Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.189276) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.0 rd, 181.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 17.8 +0.0 blob) out(18.2 +0.0 blob), read-write-amplify(23.3) write-amplify(11.3) OK, records in: 15212, records dropped: 536 output_compression: NoCompression Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.189314) EVENT_LOG_v1 {"time_micros": 1764670777189297, "job": 40, "event": "compaction_finished", "compaction_time_micros": 105071, "compaction_time_cpu_micros": 48891, "output_level": 6, "num_output_files": 1, "total_output_size": 19065069, "num_input_records": 15212, "num_output_records": 14676, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777190012, "job": 40, "event": "table_file_deletion", "file_number": 68} Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541913/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777193769, "job": 
40, "event": "table_file_deletion", "file_number": 66} Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.193968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.193977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.193980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.193983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ceph-mon[298296]: rocksdb: (Original Log Time 2025/12/02-10:19:37.193986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 2 05:19:37 localhost ovn_controller[154505]: 2025-12-02T10:19:37Z|00627|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory Dec 2 05:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb. Dec 2 05:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2. Dec 2 05:19:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e. Dec 2 05:19:39 localhost systemd[1]: tmp-crun.DTBkGi.mount: Deactivated successfully. 
Dec 2 05:19:39 localhost podman[338746]: 2025-12-02 10:19:39.453871882 +0000 UTC m=+0.087470625 container health_status 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 2 05:19:39 localhost systemd[1]: tmp-crun.zPA4sR.mount: Deactivated 
successfully. Dec 2 05:19:39 localhost podman[338747]: 2025-12-02 10:19:39.508226459 +0000 UTC m=+0.138644387 container health_status 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 2 05:19:39 localhost podman[338746]: 2025-12-02 10:19:39.537659677 +0000 UTC m=+0.171258360 container exec_died 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 2 05:19:39 localhost systemd[1]: 34d8d72f1760791efd6fce92829e22cf2c62a563d913233457df3757a8f0a0cb.service: Deactivated successfully. Dec 2 05:19:39 localhost podman[338748]: 2025-12-02 10:19:39.551861548 +0000 UTC m=+0.178027611 container health_status 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 2 05:19:39 localhost podman[338748]: 2025-12-02 10:19:39.559883123 +0000 UTC m=+0.186049266 container exec_died 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 2 05:19:39 localhost systemd[1]: 89e2cca4d9a47129c03eea427c7a5efb1eb8bdc504c72980bc05f27cf94e4c3e.service: Deactivated successfully. 
Dec 2 05:19:39 localhost podman[338747]: 2025-12-02 10:19:39.593452882 +0000 UTC m=+0.223870820 container exec_died 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64) Dec 2 05:19:39 localhost systemd[1]: 6736ec798dce98e500dcdb77911f371920e6e3051f1b61d9ef64ab452cccc2f2.service: Deactivated successfully. Dec 2 05:19:41 localhost systemd[1]: session-75.scope: Deactivated successfully. Dec 2 05:19:41 localhost systemd-logind[757]: Session 75 logged out. Waiting for processes to exit. Dec 2 05:19:41 localhost systemd-logind[757]: Removed session 75. 
Dec 2 05:19:41 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 2 05:19:41 localhost nova_compute[281854]: 2025-12-02 10:19:41.800 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:41 localhost nova_compute[281854]: 2025-12-02 10:19:41.802 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 2 05:19:43 localhost ceph-mon[298296]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 2 05:19:43 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' Dec 2 05:19:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a. 
Dec 2 05:19:46 localhost podman[338891]: 2025-12-02 10:19:46.450523647 +0000 UTC m=+0.092617963 container health_status f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 2 05:19:46 localhost podman[338891]: 2025-12-02 10:19:46.466042323 +0000 UTC m=+0.108136709 container exec_died 
f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 2 05:19:46 localhost systemd[1]: f79637bbe02d299ca23081de92d7b3b5c284700b2ff5ad8463e8d3dfa9fcba8a.service: Deactivated successfully. 
Dec 2 05:19:46 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:19:46 localhost nova_compute[281854]: 2025-12-02 10:19:46.802 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 2 05:19:46 localhost nova_compute[281854]: 2025-12-02 10:19:46.803 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 2 05:19:46 localhost nova_compute[281854]: 2025-12-02 10:19:46.803 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 2 05:19:46 localhost nova_compute[281854]: 2025-12-02 10:19:46.804 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 2 05:19:46 localhost nova_compute[281854]: 2025-12-02 10:19:46.804 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:19:46 localhost nova_compute[281854]: 2025-12-02 10:19:46.805 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 2 05:19:48 localhost ceph-mon[298296]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk'
Dec 2 05:19:51 localhost ceph-mon[298296]: mon.np0005541913@1(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 2 05:19:51 localhost nova_compute[281854]: 2025-12-02 10:19:51.806 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:19:51 localhost nova_compute[281854]: 2025-12-02 10:19:51.808 281858 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 2 05:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.
Dec 2 05:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.
Dec 2 05:19:52 localhost podman[338911]: 2025-12-02 10:19:52.445829577 +0000 UTC m=+0.081709750 container health_status cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 2 05:19:52 localhost podman[338911]: 2025-12-02 10:19:52.485953062 +0000 UTC m=+0.121833185 container exec_died cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 2 05:19:52 localhost systemd[1]: cb7a1d62ad28be61994a5cce5e8d17f6fcd752ea72af3018f59ac13d978bf782.service: Deactivated successfully.
Dec 2 05:19:52 localhost podman[338910]: 2025-12-02 10:19:52.502429704 +0000 UTC m=+0.140067084 container health_status 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 2 05:19:52 localhost podman[338910]: 2025-12-02 10:19:52.538042269 +0000 UTC m=+0.175679619 container exec_died 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 2 05:19:52 localhost systemd[1]: 53cdcad7bb2eb92a4005cb53efa297ce75c0fcc9e15541d6ad164377a8999709.service: Deactivated successfully.
Dec 2 05:19:54 localhost sshd[338960]: main: sshd: ssh-rsa algorithm is disabled
Dec 2 05:19:54 localhost systemd-logind[757]: New session 76 of user zuul.
Dec 2 05:19:54 localhost systemd[1]: Started Session 76 of User zuul.